Nov 23 20:00:53 localhost kernel: Linux version 5.14.0-639.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025
Nov 23 20:00:53 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 23 20:00:53 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 23 20:00:53 localhost kernel: BIOS-provided physical RAM map:
Nov 23 20:00:53 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 23 20:00:53 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 23 20:00:53 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 23 20:00:53 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 23 20:00:53 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 23 20:00:53 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 23 20:00:53 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 23 20:00:53 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Nov 23 20:00:53 localhost kernel: NX (Execute Disable) protection: active
Nov 23 20:00:53 localhost kernel: APIC: Static calls initialized
Nov 23 20:00:53 localhost kernel: SMBIOS 2.8 present.
Nov 23 20:00:53 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 23 20:00:53 localhost kernel: Hypervisor detected: KVM
Nov 23 20:00:53 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 23 20:00:53 localhost kernel: kvm-clock: using sched offset of 8332881758 cycles
Nov 23 20:00:53 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 23 20:00:53 localhost kernel: tsc: Detected 2799.998 MHz processor
Nov 23 20:00:53 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 23 20:00:53 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 23 20:00:53 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 23 20:00:53 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 23 20:00:53 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 23 20:00:53 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 23 20:00:53 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 23 20:00:53 localhost kernel: Using GB pages for direct mapping
Nov 23 20:00:53 localhost kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Nov 23 20:00:53 localhost kernel: ACPI: Early table checksum verification disabled
Nov 23 20:00:53 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 23 20:00:53 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 23 20:00:53 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 23 20:00:53 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 23 20:00:53 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 23 20:00:53 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 23 20:00:53 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 23 20:00:53 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 23 20:00:53 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 23 20:00:53 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 23 20:00:53 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 23 20:00:53 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 23 20:00:53 localhost kernel: No NUMA configuration found
Nov 23 20:00:53 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 23 20:00:53 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Nov 23 20:00:53 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Nov 23 20:00:53 localhost kernel: Zone ranges:
Nov 23 20:00:53 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 23 20:00:53 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 23 20:00:53 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 23 20:00:53 localhost kernel:   Device   empty
Nov 23 20:00:53 localhost kernel: Movable zone start for each node
Nov 23 20:00:53 localhost kernel: Early memory node ranges
Nov 23 20:00:53 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 23 20:00:53 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 23 20:00:53 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 23 20:00:53 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 23 20:00:53 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 23 20:00:53 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 23 20:00:53 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 23 20:00:53 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Nov 23 20:00:53 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 23 20:00:53 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 23 20:00:53 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 23 20:00:53 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 23 20:00:53 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 23 20:00:53 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 23 20:00:53 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 23 20:00:53 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 23 20:00:53 localhost kernel: TSC deadline timer available
Nov 23 20:00:53 localhost kernel: CPU topo: Max. logical packages:   8
Nov 23 20:00:53 localhost kernel: CPU topo: Max. logical dies:       8
Nov 23 20:00:53 localhost kernel: CPU topo: Max. dies per package:   1
Nov 23 20:00:53 localhost kernel: CPU topo: Max. threads per core:   1
Nov 23 20:00:53 localhost kernel: CPU topo: Num. cores per package:     1
Nov 23 20:00:53 localhost kernel: CPU topo: Num. threads per package:   1
Nov 23 20:00:53 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 23 20:00:53 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 23 20:00:53 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 23 20:00:53 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 23 20:00:53 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 23 20:00:53 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 23 20:00:53 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 23 20:00:53 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 23 20:00:53 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 23 20:00:53 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 23 20:00:53 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 23 20:00:53 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 23 20:00:53 localhost kernel: Booting paravirtualized kernel on KVM
Nov 23 20:00:53 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 23 20:00:53 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 23 20:00:53 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 23 20:00:53 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Nov 23 20:00:53 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Nov 23 20:00:53 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 23 20:00:53 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 23 20:00:53 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64", will be passed to user space.
Nov 23 20:00:53 localhost kernel: random: crng init done
Nov 23 20:00:53 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 23 20:00:53 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 23 20:00:53 localhost kernel: Fallback order for Node 0: 0 
Nov 23 20:00:53 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 23 20:00:53 localhost kernel: Policy zone: Normal
Nov 23 20:00:53 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 23 20:00:53 localhost kernel: software IO TLB: area num 8.
Nov 23 20:00:53 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 23 20:00:53 localhost kernel: ftrace: allocating 49298 entries in 193 pages
Nov 23 20:00:53 localhost kernel: ftrace: allocated 193 pages with 3 groups
Nov 23 20:00:53 localhost kernel: Dynamic Preempt: voluntary
Nov 23 20:00:53 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 23 20:00:53 localhost kernel: rcu:         RCU event tracing is enabled.
Nov 23 20:00:53 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 23 20:00:53 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Nov 23 20:00:53 localhost kernel:         Rude variant of Tasks RCU enabled.
Nov 23 20:00:53 localhost kernel:         Tracing variant of Tasks RCU enabled.
Nov 23 20:00:53 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 23 20:00:53 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 23 20:00:53 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 23 20:00:53 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 23 20:00:53 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 23 20:00:53 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 23 20:00:53 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 23 20:00:53 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 23 20:00:53 localhost kernel: Console: colour VGA+ 80x25
Nov 23 20:00:53 localhost kernel: printk: console [ttyS0] enabled
Nov 23 20:00:53 localhost kernel: ACPI: Core revision 20230331
Nov 23 20:00:53 localhost kernel: APIC: Switch to symmetric I/O mode setup
Nov 23 20:00:53 localhost kernel: x2apic enabled
Nov 23 20:00:53 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Nov 23 20:00:53 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 23 20:00:53 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Nov 23 20:00:53 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 23 20:00:53 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 23 20:00:53 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 23 20:00:53 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 23 20:00:53 localhost kernel: Spectre V2 : Mitigation: Retpolines
Nov 23 20:00:53 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 23 20:00:53 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 23 20:00:53 localhost kernel: RETBleed: Mitigation: untrained return thunk
Nov 23 20:00:53 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 23 20:00:53 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 23 20:00:53 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 23 20:00:53 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 23 20:00:53 localhost kernel: x86/bugs: return thunk changed
Nov 23 20:00:53 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 23 20:00:53 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 23 20:00:53 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 23 20:00:53 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 23 20:00:53 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 23 20:00:53 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 23 20:00:53 localhost kernel: Freeing SMP alternatives memory: 40K
Nov 23 20:00:53 localhost kernel: pid_max: default: 32768 minimum: 301
Nov 23 20:00:53 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 23 20:00:53 localhost kernel: landlock: Up and running.
Nov 23 20:00:53 localhost kernel: Yama: becoming mindful.
Nov 23 20:00:53 localhost kernel: SELinux:  Initializing.
Nov 23 20:00:53 localhost kernel: LSM support for eBPF active
Nov 23 20:00:53 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 23 20:00:53 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 23 20:00:53 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 23 20:00:53 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 23 20:00:53 localhost kernel: ... version:                0
Nov 23 20:00:53 localhost kernel: ... bit width:              48
Nov 23 20:00:53 localhost kernel: ... generic registers:      6
Nov 23 20:00:53 localhost kernel: ... value mask:             0000ffffffffffff
Nov 23 20:00:53 localhost kernel: ... max period:             00007fffffffffff
Nov 23 20:00:53 localhost kernel: ... fixed-purpose events:   0
Nov 23 20:00:53 localhost kernel: ... event mask:             000000000000003f
Nov 23 20:00:53 localhost kernel: signal: max sigframe size: 1776
Nov 23 20:00:53 localhost kernel: rcu: Hierarchical SRCU implementation.
Nov 23 20:00:53 localhost kernel: rcu:         Max phase no-delay instances is 400.
Nov 23 20:00:53 localhost kernel: smp: Bringing up secondary CPUs ...
Nov 23 20:00:53 localhost kernel: smpboot: x86: Booting SMP configuration:
Nov 23 20:00:53 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 23 20:00:53 localhost kernel: smp: Brought up 1 node, 8 CPUs
Nov 23 20:00:53 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Nov 23 20:00:53 localhost kernel: node 0 deferred pages initialised in 11ms
Nov 23 20:00:53 localhost kernel: Memory: 7765864K/8388068K available (16384K kernel code, 5786K rwdata, 13900K rodata, 4188K init, 7176K bss, 616268K reserved, 0K cma-reserved)
Nov 23 20:00:53 localhost kernel: devtmpfs: initialized
Nov 23 20:00:53 localhost kernel: x86/mm: Memory block size: 128MB
Nov 23 20:00:53 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 23 20:00:53 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 23 20:00:53 localhost kernel: pinctrl core: initialized pinctrl subsystem
Nov 23 20:00:53 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 23 20:00:53 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 23 20:00:53 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 23 20:00:53 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 23 20:00:53 localhost kernel: audit: initializing netlink subsys (disabled)
Nov 23 20:00:53 localhost kernel: audit: type=2000 audit(1763928051.511:1): state=initialized audit_enabled=0 res=1
Nov 23 20:00:53 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 23 20:00:53 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 23 20:00:53 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 23 20:00:53 localhost kernel: cpuidle: using governor menu
Nov 23 20:00:53 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 23 20:00:53 localhost kernel: PCI: Using configuration type 1 for base access
Nov 23 20:00:53 localhost kernel: PCI: Using configuration type 1 for extended access
Nov 23 20:00:53 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 23 20:00:53 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 23 20:00:53 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 23 20:00:53 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 23 20:00:53 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 23 20:00:53 localhost kernel: Demotion targets for Node 0: null
Nov 23 20:00:53 localhost kernel: cryptd: max_cpu_qlen set to 1000
Nov 23 20:00:53 localhost kernel: ACPI: Added _OSI(Module Device)
Nov 23 20:00:53 localhost kernel: ACPI: Added _OSI(Processor Device)
Nov 23 20:00:53 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 23 20:00:53 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 23 20:00:53 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 23 20:00:53 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 23 20:00:53 localhost kernel: ACPI: Interpreter enabled
Nov 23 20:00:53 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 23 20:00:53 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Nov 23 20:00:53 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 23 20:00:53 localhost kernel: PCI: Using E820 reservations for host bridge windows
Nov 23 20:00:53 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 23 20:00:53 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 23 20:00:53 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 23 20:00:53 localhost kernel: acpiphp: Slot [3] registered
Nov 23 20:00:53 localhost kernel: acpiphp: Slot [4] registered
Nov 23 20:00:53 localhost kernel: acpiphp: Slot [5] registered
Nov 23 20:00:53 localhost kernel: acpiphp: Slot [6] registered
Nov 23 20:00:53 localhost kernel: acpiphp: Slot [7] registered
Nov 23 20:00:53 localhost kernel: acpiphp: Slot [8] registered
Nov 23 20:00:53 localhost kernel: acpiphp: Slot [9] registered
Nov 23 20:00:53 localhost kernel: acpiphp: Slot [10] registered
Nov 23 20:00:53 localhost kernel: acpiphp: Slot [11] registered
Nov 23 20:00:53 localhost kernel: acpiphp: Slot [12] registered
Nov 23 20:00:53 localhost kernel: acpiphp: Slot [13] registered
Nov 23 20:00:53 localhost kernel: acpiphp: Slot [14] registered
Nov 23 20:00:53 localhost kernel: acpiphp: Slot [15] registered
Nov 23 20:00:53 localhost kernel: acpiphp: Slot [16] registered
Nov 23 20:00:53 localhost kernel: acpiphp: Slot [17] registered
Nov 23 20:00:53 localhost kernel: acpiphp: Slot [18] registered
Nov 23 20:00:53 localhost kernel: acpiphp: Slot [19] registered
Nov 23 20:00:53 localhost kernel: acpiphp: Slot [20] registered
Nov 23 20:00:53 localhost kernel: acpiphp: Slot [21] registered
Nov 23 20:00:53 localhost kernel: acpiphp: Slot [22] registered
Nov 23 20:00:53 localhost kernel: acpiphp: Slot [23] registered
Nov 23 20:00:53 localhost kernel: acpiphp: Slot [24] registered
Nov 23 20:00:53 localhost kernel: acpiphp: Slot [25] registered
Nov 23 20:00:53 localhost kernel: acpiphp: Slot [26] registered
Nov 23 20:00:53 localhost kernel: acpiphp: Slot [27] registered
Nov 23 20:00:53 localhost kernel: acpiphp: Slot [28] registered
Nov 23 20:00:53 localhost kernel: acpiphp: Slot [29] registered
Nov 23 20:00:53 localhost kernel: acpiphp: Slot [30] registered
Nov 23 20:00:53 localhost kernel: acpiphp: Slot [31] registered
Nov 23 20:00:53 localhost kernel: PCI host bridge to bus 0000:00
Nov 23 20:00:53 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 23 20:00:53 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 23 20:00:53 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 23 20:00:53 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 23 20:00:53 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 23 20:00:53 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 23 20:00:53 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 23 20:00:53 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 23 20:00:53 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 23 20:00:53 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 23 20:00:53 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 23 20:00:53 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 23 20:00:53 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 23 20:00:53 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 23 20:00:53 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 23 20:00:53 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 23 20:00:53 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 23 20:00:53 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 23 20:00:53 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 23 20:00:53 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 23 20:00:53 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 23 20:00:53 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 23 20:00:53 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 23 20:00:53 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 23 20:00:53 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 23 20:00:53 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 23 20:00:53 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 23 20:00:53 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 23 20:00:53 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 23 20:00:53 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 23 20:00:53 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 23 20:00:53 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 23 20:00:53 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 23 20:00:53 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 23 20:00:53 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 23 20:00:53 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 23 20:00:53 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 23 20:00:53 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 23 20:00:53 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 23 20:00:53 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 23 20:00:53 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 23 20:00:53 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 23 20:00:53 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 23 20:00:53 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 23 20:00:53 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 23 20:00:53 localhost kernel: iommu: Default domain type: Translated
Nov 23 20:00:53 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 23 20:00:53 localhost kernel: SCSI subsystem initialized
Nov 23 20:00:53 localhost kernel: ACPI: bus type USB registered
Nov 23 20:00:53 localhost kernel: usbcore: registered new interface driver usbfs
Nov 23 20:00:53 localhost kernel: usbcore: registered new interface driver hub
Nov 23 20:00:53 localhost kernel: usbcore: registered new device driver usb
Nov 23 20:00:53 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 23 20:00:53 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 23 20:00:53 localhost kernel: PTP clock support registered
Nov 23 20:00:53 localhost kernel: EDAC MC: Ver: 3.0.0
Nov 23 20:00:53 localhost kernel: NetLabel: Initializing
Nov 23 20:00:53 localhost kernel: NetLabel:  domain hash size = 128
Nov 23 20:00:53 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 23 20:00:53 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Nov 23 20:00:53 localhost kernel: PCI: Using ACPI for IRQ routing
Nov 23 20:00:53 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 23 20:00:53 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 23 20:00:53 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Nov 23 20:00:53 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 23 20:00:53 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 23 20:00:53 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 23 20:00:53 localhost kernel: vgaarb: loaded
Nov 23 20:00:53 localhost kernel: clocksource: Switched to clocksource kvm-clock
Nov 23 20:00:53 localhost kernel: VFS: Disk quotas dquot_6.6.0
Nov 23 20:00:53 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 23 20:00:53 localhost kernel: pnp: PnP ACPI init
Nov 23 20:00:53 localhost kernel: pnp 00:03: [dma 2]
Nov 23 20:00:53 localhost kernel: pnp: PnP ACPI: found 5 devices
Nov 23 20:00:53 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 23 20:00:53 localhost kernel: NET: Registered PF_INET protocol family
Nov 23 20:00:53 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 23 20:00:53 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 23 20:00:53 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 23 20:00:53 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 23 20:00:53 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 23 20:00:53 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 23 20:00:53 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 23 20:00:53 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 23 20:00:53 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 23 20:00:53 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 23 20:00:53 localhost kernel: NET: Registered PF_XDP protocol family
Nov 23 20:00:53 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 23 20:00:53 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 23 20:00:53 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 23 20:00:53 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 23 20:00:53 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 23 20:00:53 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 23 20:00:53 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 23 20:00:53 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 23 20:00:53 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 72036 usecs
Nov 23 20:00:53 localhost kernel: PCI: CLS 0 bytes, default 64
Nov 23 20:00:53 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 23 20:00:53 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 23 20:00:53 localhost kernel: ACPI: bus type thunderbolt registered
Nov 23 20:00:53 localhost kernel: Trying to unpack rootfs image as initramfs...
Nov 23 20:00:53 localhost kernel: Initialise system trusted keyrings
Nov 23 20:00:53 localhost kernel: Key type blacklist registered
Nov 23 20:00:53 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 23 20:00:53 localhost kernel: zbud: loaded
Nov 23 20:00:53 localhost kernel: integrity: Platform Keyring initialized
Nov 23 20:00:53 localhost kernel: integrity: Machine keyring initialized
Nov 23 20:00:53 localhost kernel: Freeing initrd memory: 85868K
Nov 23 20:00:53 localhost kernel: NET: Registered PF_ALG protocol family
Nov 23 20:00:53 localhost kernel: xor: automatically using best checksumming function   avx       
Nov 23 20:00:53 localhost kernel: Key type asymmetric registered
Nov 23 20:00:53 localhost kernel: Asymmetric key parser 'x509' registered
Nov 23 20:00:53 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 23 20:00:53 localhost kernel: io scheduler mq-deadline registered
Nov 23 20:00:53 localhost kernel: io scheduler kyber registered
Nov 23 20:00:53 localhost kernel: io scheduler bfq registered
Nov 23 20:00:53 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 23 20:00:53 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 23 20:00:53 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 23 20:00:53 localhost kernel: ACPI: button: Power Button [PWRF]
Nov 23 20:00:53 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 23 20:00:53 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 23 20:00:53 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 23 20:00:53 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 23 20:00:53 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 23 20:00:53 localhost kernel: Non-volatile memory driver v1.3
Nov 23 20:00:53 localhost kernel: rdac: device handler registered
Nov 23 20:00:53 localhost kernel: hp_sw: device handler registered
Nov 23 20:00:53 localhost kernel: emc: device handler registered
Nov 23 20:00:53 localhost kernel: alua: device handler registered
Nov 23 20:00:53 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 23 20:00:53 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 23 20:00:53 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 23 20:00:53 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 23 20:00:53 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 23 20:00:53 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 23 20:00:53 localhost kernel: usb usb1: Product: UHCI Host Controller
Nov 23 20:00:53 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-639.el9.x86_64 uhci_hcd
Nov 23 20:00:53 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 23 20:00:53 localhost kernel: hub 1-0:1.0: USB hub found
Nov 23 20:00:53 localhost kernel: hub 1-0:1.0: 2 ports detected
Nov 23 20:00:53 localhost kernel: usbcore: registered new interface driver usbserial_generic
Nov 23 20:00:53 localhost kernel: usbserial: USB Serial support registered for generic
Nov 23 20:00:53 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 23 20:00:53 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 23 20:00:53 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 23 20:00:53 localhost kernel: mousedev: PS/2 mouse device common for all mice
Nov 23 20:00:53 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 23 20:00:53 localhost kernel: rtc_cmos 00:04: registered as rtc0
Nov 23 20:00:53 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-11-23T20:00:52 UTC (1763928052)
Nov 23 20:00:53 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 23 20:00:53 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 23 20:00:53 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 23 20:00:53 localhost kernel: usbcore: registered new interface driver usbhid
Nov 23 20:00:53 localhost kernel: usbhid: USB HID core driver
Nov 23 20:00:53 localhost kernel: drop_monitor: Initializing network drop monitor service
Nov 23 20:00:53 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 23 20:00:53 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 23 20:00:53 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 23 20:00:53 localhost kernel: Initializing XFRM netlink socket
Nov 23 20:00:53 localhost kernel: NET: Registered PF_INET6 protocol family
Nov 23 20:00:53 localhost kernel: Segment Routing with IPv6
Nov 23 20:00:53 localhost kernel: NET: Registered PF_PACKET protocol family
Nov 23 20:00:53 localhost kernel: mpls_gso: MPLS GSO support
Nov 23 20:00:53 localhost kernel: IPI shorthand broadcast: enabled
Nov 23 20:00:53 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Nov 23 20:00:53 localhost kernel: AES CTR mode by8 optimization enabled
Nov 23 20:00:53 localhost kernel: sched_clock: Marking stable (1166015658, 158927723)->(1401774591, -76831210)
Nov 23 20:00:53 localhost kernel: registered taskstats version 1
Nov 23 20:00:53 localhost kernel: Loading compiled-in X.509 certificates
Nov 23 20:00:53 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: f7751431c703da8a75244ce96aad68601cf1c188'
Nov 23 20:00:53 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 23 20:00:53 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 23 20:00:53 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 23 20:00:53 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 23 20:00:53 localhost kernel: Demotion targets for Node 0: null
Nov 23 20:00:53 localhost kernel: page_owner is disabled
Nov 23 20:00:53 localhost kernel: Key type .fscrypt registered
Nov 23 20:00:53 localhost kernel: Key type fscrypt-provisioning registered
Nov 23 20:00:53 localhost kernel: Key type big_key registered
Nov 23 20:00:53 localhost kernel: Key type encrypted registered
Nov 23 20:00:53 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 23 20:00:53 localhost kernel: Loading compiled-in module X.509 certificates
Nov 23 20:00:53 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: f7751431c703da8a75244ce96aad68601cf1c188'
Nov 23 20:00:53 localhost kernel: ima: Allocated hash algorithm: sha256
Nov 23 20:00:53 localhost kernel: ima: No architecture policies found
Nov 23 20:00:53 localhost kernel: evm: Initialising EVM extended attributes:
Nov 23 20:00:53 localhost kernel: evm: security.selinux
Nov 23 20:00:53 localhost kernel: evm: security.SMACK64 (disabled)
Nov 23 20:00:53 localhost kernel: evm: security.SMACK64EXEC (disabled)
Nov 23 20:00:53 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 23 20:00:53 localhost kernel: evm: security.SMACK64MMAP (disabled)
Nov 23 20:00:53 localhost kernel: evm: security.apparmor (disabled)
Nov 23 20:00:53 localhost kernel: evm: security.ima
Nov 23 20:00:53 localhost kernel: evm: security.capability
Nov 23 20:00:53 localhost kernel: evm: HMAC attrs: 0x1
Nov 23 20:00:53 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 23 20:00:53 localhost kernel: Running certificate verification RSA selftest
Nov 23 20:00:53 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 23 20:00:53 localhost kernel: Running certificate verification ECDSA selftest
Nov 23 20:00:53 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 23 20:00:53 localhost kernel: clk: Disabling unused clocks
Nov 23 20:00:53 localhost kernel: Freeing unused decrypted memory: 2028K
Nov 23 20:00:53 localhost kernel: Freeing unused kernel image (initmem) memory: 4188K
Nov 23 20:00:53 localhost kernel: Write protecting the kernel read-only data: 30720k
Nov 23 20:00:53 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 23 20:00:53 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 23 20:00:53 localhost kernel: Run /init as init process
Nov 23 20:00:53 localhost kernel:   with arguments:
Nov 23 20:00:53 localhost kernel:     /init
Nov 23 20:00:53 localhost kernel:   with environment:
Nov 23 20:00:53 localhost kernel:     HOME=/
Nov 23 20:00:53 localhost kernel:     TERM=linux
Nov 23 20:00:53 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64
Nov 23 20:00:53 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 23 20:00:53 localhost systemd[1]: Detected virtualization kvm.
Nov 23 20:00:53 localhost systemd[1]: Detected architecture x86-64.
Nov 23 20:00:53 localhost systemd[1]: Running in initrd.
Nov 23 20:00:53 localhost systemd[1]: No hostname configured, using default hostname.
Nov 23 20:00:53 localhost systemd[1]: Hostname set to <localhost>.
Nov 23 20:00:53 localhost systemd[1]: Initializing machine ID from VM UUID.
Nov 23 20:00:53 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 23 20:00:53 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 23 20:00:53 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Nov 23 20:00:53 localhost kernel: usb 1-1: Manufacturer: QEMU
Nov 23 20:00:53 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 23 20:00:53 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 23 20:00:53 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 23 20:00:53 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Nov 23 20:00:53 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 23 20:00:53 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 23 20:00:53 localhost systemd[1]: Reached target Initrd /usr File System.
Nov 23 20:00:53 localhost systemd[1]: Reached target Local File Systems.
Nov 23 20:00:53 localhost systemd[1]: Reached target Path Units.
Nov 23 20:00:53 localhost systemd[1]: Reached target Slice Units.
Nov 23 20:00:53 localhost systemd[1]: Reached target Swaps.
Nov 23 20:00:53 localhost systemd[1]: Reached target Timer Units.
Nov 23 20:00:53 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 23 20:00:53 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Nov 23 20:00:53 localhost systemd[1]: Listening on Journal Socket.
Nov 23 20:00:53 localhost systemd[1]: Listening on udev Control Socket.
Nov 23 20:00:53 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 23 20:00:53 localhost systemd[1]: Reached target Socket Units.
Nov 23 20:00:53 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 23 20:00:53 localhost systemd[1]: Starting Journal Service...
Nov 23 20:00:53 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 23 20:00:53 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 23 20:00:53 localhost systemd[1]: Starting Create System Users...
Nov 23 20:00:53 localhost systemd[1]: Starting Setup Virtual Console...
Nov 23 20:00:53 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 23 20:00:53 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 23 20:00:53 localhost systemd[1]: Finished Create System Users.
Nov 23 20:00:53 localhost systemd-journald[303]: Journal started
Nov 23 20:00:53 localhost systemd-journald[303]: Runtime Journal (/run/log/journal/dffd854b01ce4a28b7a632174dbe320c) is 8.0M, max 153.6M, 145.6M free.
Nov 23 20:00:53 localhost systemd-sysusers[308]: Creating group 'users' with GID 100.
Nov 23 20:00:53 localhost systemd-sysusers[308]: Creating group 'dbus' with GID 81.
Nov 23 20:00:53 localhost systemd-sysusers[308]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 23 20:00:53 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 23 20:00:53 localhost systemd[1]: Started Journal Service.
Nov 23 20:00:53 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 23 20:00:53 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 23 20:00:53 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 23 20:00:53 localhost systemd[1]: Finished Setup Virtual Console.
Nov 23 20:00:53 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 23 20:00:53 localhost systemd[1]: Starting dracut cmdline hook...
Nov 23 20:00:53 localhost dracut-cmdline[323]: dracut-9 dracut-057-102.git20250818.el9
Nov 23 20:00:53 localhost dracut-cmdline[323]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 23 20:00:53 localhost systemd[1]: Finished dracut cmdline hook.
Nov 23 20:00:53 localhost systemd[1]: Starting dracut pre-udev hook...
Nov 23 20:00:53 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 23 20:00:53 localhost kernel: device-mapper: uevent: version 1.0.3
Nov 23 20:00:53 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 23 20:00:53 localhost kernel: RPC: Registered named UNIX socket transport module.
Nov 23 20:00:53 localhost kernel: RPC: Registered udp transport module.
Nov 23 20:00:53 localhost kernel: RPC: Registered tcp transport module.
Nov 23 20:00:53 localhost kernel: RPC: Registered tcp-with-tls transport module.
Nov 23 20:00:53 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 23 20:00:53 localhost rpc.statd[440]: Version 2.5.4 starting
Nov 23 20:00:53 localhost rpc.statd[440]: Initializing NSM state
Nov 23 20:00:53 localhost rpc.idmapd[445]: Setting log level to 0
Nov 23 20:00:53 localhost systemd[1]: Finished dracut pre-udev hook.
Nov 23 20:00:53 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 23 20:00:53 localhost systemd-udevd[458]: Using default interface naming scheme 'rhel-9.0'.
Nov 23 20:00:53 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 23 20:00:53 localhost systemd[1]: Starting dracut pre-trigger hook...
Nov 23 20:00:53 localhost systemd[1]: Finished dracut pre-trigger hook.
Nov 23 20:00:53 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 23 20:00:53 localhost systemd[1]: Created slice Slice /system/modprobe.
Nov 23 20:00:53 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 23 20:00:53 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 23 20:00:53 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 23 20:00:53 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 23 20:00:53 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 23 20:00:53 localhost systemd[1]: Reached target Network.
Nov 23 20:00:53 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 23 20:00:53 localhost systemd[1]: Starting dracut initqueue hook...
Nov 23 20:00:53 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 23 20:00:53 localhost kernel: libata version 3.00 loaded.
Nov 23 20:00:53 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 23 20:00:53 localhost kernel:  vda: vda1
Nov 23 20:00:53 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Nov 23 20:00:53 localhost kernel: scsi host0: ata_piix
Nov 23 20:00:53 localhost kernel: scsi host1: ata_piix
Nov 23 20:00:53 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 23 20:00:53 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 23 20:00:53 localhost systemd-udevd[474]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 20:00:54 localhost systemd[1]: Found device /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 23 20:00:54 localhost systemd[1]: Reached target Initrd Root Device.
Nov 23 20:00:54 localhost systemd[1]: Mounting Kernel Configuration File System...
Nov 23 20:00:54 localhost systemd[1]: Mounted Kernel Configuration File System.
Nov 23 20:00:54 localhost systemd[1]: Reached target System Initialization.
Nov 23 20:00:54 localhost systemd[1]: Reached target Basic System.
Nov 23 20:00:54 localhost kernel: ata1: found unknown device (class 0)
Nov 23 20:00:54 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 23 20:00:54 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 23 20:00:54 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 23 20:00:54 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 23 20:00:54 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 23 20:00:54 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Nov 23 20:00:54 localhost systemd[1]: Finished dracut initqueue hook.
Nov 23 20:00:54 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Nov 23 20:00:54 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Nov 23 20:00:54 localhost systemd[1]: Reached target Remote File Systems.
Nov 23 20:00:54 localhost systemd[1]: Starting dracut pre-mount hook...
Nov 23 20:00:54 localhost systemd[1]: Finished dracut pre-mount hook.
Nov 23 20:00:54 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709...
Nov 23 20:00:54 localhost systemd-fsck[556]: /usr/sbin/fsck.xfs: XFS file system.
Nov 23 20:00:54 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 23 20:00:54 localhost systemd[1]: Mounting /sysroot...
Nov 23 20:00:55 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 23 20:00:55 localhost kernel: XFS (vda1): Mounting V5 Filesystem 47e3724e-7a1b-439a-9543-b98c9a290709
Nov 23 20:02:24 localhost systemd[1]: sysroot.mount: Mounting timed out. Terminating.
Nov 23 20:02:35 localhost kernel: XFS (vda1): Ending clean mount
Nov 23 20:02:48 localhost systemd[1]: sysroot.mount: Mount process exited, code=killed, status=15/TERM
Nov 23 20:02:48 localhost systemd[1]: Mounted /sysroot.
Nov 23 20:02:48 localhost systemd[1]: Reached target Initrd Root File System.
Nov 23 20:02:48 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 23 20:02:48 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 23 20:02:48 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 23 20:02:48 localhost systemd[1]: Reached target Initrd File Systems.
Nov 23 20:02:48 localhost systemd[1]: Reached target Initrd Default Target.
Nov 23 20:02:48 localhost systemd[1]: Starting dracut mount hook...
Nov 23 20:02:48 localhost systemd[1]: Finished dracut mount hook.
Nov 23 20:02:48 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 23 20:02:48 localhost rpc.idmapd[445]: exiting on signal 15
Nov 23 20:02:48 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 23 20:02:48 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 23 20:02:48 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 23 20:02:48 localhost systemd[1]: Stopped target Network.
Nov 23 20:02:48 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 23 20:02:48 localhost systemd[1]: Stopped target Timer Units.
Nov 23 20:02:48 localhost systemd[1]: dbus.socket: Deactivated successfully.
Nov 23 20:02:48 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 23 20:02:48 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 23 20:02:48 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 23 20:02:48 localhost systemd[1]: Stopped target Initrd Default Target.
Nov 23 20:02:48 localhost systemd[1]: Stopped target Basic System.
Nov 23 20:02:48 localhost systemd[1]: Stopped target Initrd Root Device.
Nov 23 20:02:48 localhost systemd[1]: Stopped target Initrd /usr File System.
Nov 23 20:02:48 localhost systemd[1]: Stopped target Path Units.
Nov 23 20:02:48 localhost systemd[1]: Stopped target Remote File Systems.
Nov 23 20:02:48 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 23 20:02:48 localhost systemd[1]: Stopped target Slice Units.
Nov 23 20:02:48 localhost systemd[1]: Stopped target Socket Units.
Nov 23 20:02:48 localhost systemd[1]: Stopped target System Initialization.
Nov 23 20:02:48 localhost systemd[1]: Stopped target Local File Systems.
Nov 23 20:02:48 localhost systemd[1]: Stopped target Swaps.
Nov 23 20:02:48 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 23 20:02:48 localhost systemd[1]: Stopped dracut mount hook.
Nov 23 20:02:48 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 23 20:02:48 localhost systemd[1]: Stopped dracut pre-mount hook.
Nov 23 20:02:48 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Nov 23 20:02:48 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 23 20:02:48 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 23 20:02:48 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 23 20:02:48 localhost systemd[1]: Stopped dracut initqueue hook.
Nov 23 20:02:48 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 23 20:02:48 localhost systemd[1]: Stopped Apply Kernel Variables.
Nov 23 20:02:48 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 23 20:02:48 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Nov 23 20:02:48 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 23 20:02:48 localhost systemd[1]: Stopped Coldplug All udev Devices.
Nov 23 20:02:48 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 23 20:02:48 localhost systemd[1]: Stopped dracut pre-trigger hook.
Nov 23 20:02:48 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 23 20:02:48 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 23 20:02:48 localhost systemd[1]: Stopped Setup Virtual Console.
Nov 23 20:02:48 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 23 20:02:48 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 23 20:02:48 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 23 20:02:48 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 23 20:02:48 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 23 20:02:48 localhost systemd[1]: Closed udev Control Socket.
Nov 23 20:02:48 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 23 20:02:48 localhost systemd[1]: Closed udev Kernel Socket.
Nov 23 20:02:48 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 23 20:02:48 localhost systemd[1]: Stopped dracut pre-udev hook.
Nov 23 20:02:48 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 23 20:02:48 localhost systemd[1]: Stopped dracut cmdline hook.
Nov 23 20:02:48 localhost systemd[1]: Starting Cleanup udev Database...
Nov 23 20:02:48 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 23 20:02:48 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 23 20:02:48 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 23 20:02:48 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Nov 23 20:02:48 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 23 20:02:48 localhost systemd[1]: Stopped Create System Users.
Nov 23 20:02:48 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 23 20:02:48 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 23 20:02:48 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 23 20:02:48 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 23 20:02:48 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 23 20:02:48 localhost systemd[1]: Finished Cleanup udev Database.
Nov 23 20:02:48 localhost systemd[1]: Reached target Switch Root.
Nov 23 20:02:48 localhost systemd[1]: Starting Switch Root...
Nov 23 20:02:48 localhost systemd[1]: Switching root.
Nov 23 20:02:48 localhost systemd-journald[303]: Journal stopped
Nov 23 20:02:49 localhost systemd-journald[303]: Received SIGTERM from PID 1 (systemd).
Nov 23 20:02:49 localhost kernel: audit: type=1404 audit(1763928168.781:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 23 20:02:49 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 20:02:49 localhost kernel: SELinux:  policy capability open_perms=1
Nov 23 20:02:49 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 20:02:49 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 23 20:02:49 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 20:02:49 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 20:02:49 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 20:02:49 localhost kernel: audit: type=1403 audit(1763928168.951:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 23 20:02:49 localhost systemd[1]: Successfully loaded SELinux policy in 175.501ms.
Nov 23 20:02:49 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 37.254ms.
Nov 23 20:02:49 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 23 20:02:49 localhost systemd[1]: Detected virtualization kvm.
Nov 23 20:02:49 localhost systemd[1]: Detected architecture x86-64.
Nov 23 20:02:49 localhost systemd-rc-local-generator[635]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:02:49 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 23 20:02:49 localhost systemd[1]: Stopped Switch Root.
Nov 23 20:02:49 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 23 20:02:49 localhost systemd[1]: Created slice Slice /system/getty.
Nov 23 20:02:49 localhost systemd[1]: Created slice Slice /system/serial-getty.
Nov 23 20:02:49 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Nov 23 20:02:49 localhost systemd[1]: Created slice User and Session Slice.
Nov 23 20:02:49 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 23 20:02:49 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Nov 23 20:02:49 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 23 20:02:49 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 23 20:02:49 localhost systemd[1]: Stopped target Switch Root.
Nov 23 20:02:49 localhost systemd[1]: Stopped target Initrd File Systems.
Nov 23 20:02:49 localhost systemd[1]: Stopped target Initrd Root File System.
Nov 23 20:02:49 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Nov 23 20:02:49 localhost systemd[1]: Reached target Path Units.
Nov 23 20:02:49 localhost systemd[1]: Reached target rpc_pipefs.target.
Nov 23 20:02:49 localhost systemd[1]: Reached target Slice Units.
Nov 23 20:02:49 localhost systemd[1]: Reached target Swaps.
Nov 23 20:02:49 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Nov 23 20:02:49 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Nov 23 20:02:49 localhost systemd[1]: Reached target RPC Port Mapper.
Nov 23 20:02:49 localhost systemd[1]: Listening on Process Core Dump Socket.
Nov 23 20:02:49 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Nov 23 20:02:49 localhost systemd[1]: Listening on udev Control Socket.
Nov 23 20:02:49 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 23 20:02:49 localhost systemd[1]: Mounting Huge Pages File System...
Nov 23 20:02:49 localhost systemd[1]: Mounting POSIX Message Queue File System...
Nov 23 20:02:49 localhost systemd[1]: Mounting Kernel Debug File System...
Nov 23 20:02:49 localhost systemd[1]: Mounting Kernel Trace File System...
Nov 23 20:02:49 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 23 20:02:49 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 23 20:02:49 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 23 20:02:49 localhost systemd[1]: Starting Load Kernel Module drm...
Nov 23 20:02:49 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Nov 23 20:02:49 localhost systemd[1]: Starting Load Kernel Module fuse...
Nov 23 20:02:49 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 23 20:02:49 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 23 20:02:49 localhost systemd[1]: Stopped File System Check on Root Device.
Nov 23 20:02:49 localhost systemd[1]: Stopped Journal Service.
Nov 23 20:02:49 localhost kernel: fuse: init (API version 7.37)
Nov 23 20:02:49 localhost systemd[1]: Starting Journal Service...
Nov 23 20:02:49 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 23 20:02:49 localhost systemd[1]: Starting Generate network units from Kernel command line...
Nov 23 20:02:49 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 23 20:02:49 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Nov 23 20:02:49 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 23 20:02:49 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 23 20:02:49 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 23 20:02:49 localhost systemd[1]: Mounted Huge Pages File System.
Nov 23 20:02:49 localhost systemd[1]: Mounted POSIX Message Queue File System.
Nov 23 20:02:49 localhost systemd[1]: Mounted Kernel Debug File System.
Nov 23 20:02:49 localhost systemd-journald[676]: Journal started
Nov 23 20:02:49 localhost systemd-journald[676]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 23 20:02:49 localhost systemd[1]: Queued start job for default target Multi-User System.
Nov 23 20:02:49 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 23 20:02:49 localhost systemd[1]: Started Journal Service.
Nov 23 20:02:49 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 23 20:02:49 localhost systemd[1]: Mounted Kernel Trace File System.
Nov 23 20:02:49 localhost kernel: ACPI: bus type drm_connector registered
Nov 23 20:02:49 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 23 20:02:49 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 23 20:02:49 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 23 20:02:49 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 23 20:02:49 localhost systemd[1]: Finished Load Kernel Module drm.
Nov 23 20:02:49 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 23 20:02:49 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 23 20:02:49 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 23 20:02:49 localhost systemd[1]: Finished Load Kernel Module fuse.
Nov 23 20:02:49 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 23 20:02:49 localhost systemd[1]: Finished Generate network units from Kernel command line.
Nov 23 20:02:49 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 23 20:02:49 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 23 20:02:49 localhost systemd[1]: Mounting FUSE Control File System...
Nov 23 20:02:49 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 23 20:02:49 localhost systemd[1]: Starting Rebuild Hardware Database...
Nov 23 20:02:49 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 23 20:02:49 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 23 20:02:49 localhost systemd[1]: Starting Load/Save OS Random Seed...
Nov 23 20:02:49 localhost systemd[1]: Starting Create System Users...
Nov 23 20:02:49 localhost systemd[1]: Mounted FUSE Control File System.
Nov 23 20:02:49 localhost systemd-journald[676]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 23 20:02:49 localhost systemd-journald[676]: Received client request to flush runtime journal.
Nov 23 20:02:49 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 23 20:02:49 localhost systemd[1]: Finished Load/Save OS Random Seed.
Nov 23 20:02:49 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 23 20:02:49 localhost systemd[1]: Finished Create System Users.
Nov 23 20:02:49 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 23 20:02:49 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 23 20:02:49 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 23 20:02:49 localhost systemd[1]: Reached target Preparation for Local File Systems.
Nov 23 20:02:49 localhost systemd[1]: Reached target Local File Systems.
Nov 23 20:02:49 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 23 20:02:49 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 23 20:02:49 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 23 20:02:49 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 23 20:02:49 localhost systemd[1]: Starting Automatic Boot Loader Update...
Nov 23 20:02:49 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 23 20:02:49 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 23 20:02:49 localhost bootctl[695]: Couldn't find EFI system partition, skipping.
Nov 23 20:02:49 localhost systemd[1]: Finished Automatic Boot Loader Update.
Nov 23 20:02:50 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 23 20:02:50 localhost systemd[1]: Starting Security Auditing Service...
Nov 23 20:02:50 localhost systemd[1]: Starting RPC Bind...
Nov 23 20:02:50 localhost systemd[1]: Starting Rebuild Journal Catalog...
Nov 23 20:02:50 localhost auditd[701]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 23 20:02:50 localhost auditd[701]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 23 20:02:50 localhost systemd[1]: Started RPC Bind.
Nov 23 20:02:50 localhost systemd[1]: Finished Rebuild Journal Catalog.
Nov 23 20:02:50 localhost augenrules[706]: /sbin/augenrules: No change
Nov 23 20:02:50 localhost augenrules[721]: No rules
Nov 23 20:02:50 localhost augenrules[721]: enabled 1
Nov 23 20:02:50 localhost augenrules[721]: failure 1
Nov 23 20:02:50 localhost augenrules[721]: pid 701
Nov 23 20:02:50 localhost augenrules[721]: rate_limit 0
Nov 23 20:02:50 localhost augenrules[721]: backlog_limit 8192
Nov 23 20:02:50 localhost augenrules[721]: lost 0
Nov 23 20:02:50 localhost augenrules[721]: backlog 3
Nov 23 20:02:50 localhost augenrules[721]: backlog_wait_time 60000
Nov 23 20:02:50 localhost augenrules[721]: backlog_wait_time_actual 0
Nov 23 20:02:50 localhost augenrules[721]: enabled 1
Nov 23 20:02:50 localhost augenrules[721]: failure 1
Nov 23 20:02:50 localhost augenrules[721]: pid 701
Nov 23 20:02:50 localhost augenrules[721]: rate_limit 0
Nov 23 20:02:50 localhost augenrules[721]: backlog_limit 8192
Nov 23 20:02:50 localhost augenrules[721]: lost 0
Nov 23 20:02:50 localhost augenrules[721]: backlog 3
Nov 23 20:02:50 localhost augenrules[721]: backlog_wait_time 60000
Nov 23 20:02:50 localhost augenrules[721]: backlog_wait_time_actual 0
Nov 23 20:02:50 localhost augenrules[721]: enabled 1
Nov 23 20:02:50 localhost augenrules[721]: failure 1
Nov 23 20:02:50 localhost augenrules[721]: pid 701
Nov 23 20:02:50 localhost augenrules[721]: rate_limit 0
Nov 23 20:02:50 localhost augenrules[721]: backlog_limit 8192
Nov 23 20:02:50 localhost augenrules[721]: lost 0
Nov 23 20:02:50 localhost augenrules[721]: backlog 1
Nov 23 20:02:50 localhost augenrules[721]: backlog_wait_time 60000
Nov 23 20:02:50 localhost augenrules[721]: backlog_wait_time_actual 0
Nov 23 20:02:50 localhost systemd[1]: Started Security Auditing Service.
Nov 23 20:02:50 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 23 20:02:50 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 23 20:02:50 localhost systemd[1]: Finished Rebuild Hardware Database.
Nov 23 20:02:50 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 23 20:02:50 localhost systemd-udevd[729]: Using default interface naming scheme 'rhel-9.0'.
Nov 23 20:02:50 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 23 20:02:50 localhost systemd[1]: Starting Update is Completed...
Nov 23 20:02:50 localhost systemd[1]: Finished Update is Completed.
Nov 23 20:02:50 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 23 20:02:50 localhost systemd[1]: Reached target System Initialization.
Nov 23 20:02:50 localhost systemd[1]: Started dnf makecache --timer.
Nov 23 20:02:50 localhost systemd[1]: Started Daily rotation of log files.
Nov 23 20:02:50 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 23 20:02:50 localhost systemd[1]: Reached target Timer Units.
Nov 23 20:02:50 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 23 20:02:50 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 23 20:02:50 localhost systemd[1]: Reached target Socket Units.
Nov 23 20:02:50 localhost systemd[1]: Starting D-Bus System Message Bus...
Nov 23 20:02:50 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 23 20:02:50 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 23 20:02:50 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 23 20:02:51 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 23 20:02:51 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 23 20:02:51 localhost systemd-udevd[753]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 20:02:51 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 23 20:02:51 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 23 20:02:51 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 23 20:02:51 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 23 20:02:51 localhost systemd[1]: Started D-Bus System Message Bus.
Nov 23 20:02:51 localhost dbus-broker-lau[755]: Ready
Nov 23 20:02:51 localhost systemd[1]: Reached target Basic System.
Nov 23 20:02:51 localhost systemd[1]: Starting NTP client/server...
Nov 23 20:02:51 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 23 20:02:51 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 23 20:02:51 localhost systemd[1]: Starting IPv4 firewall with iptables...
Nov 23 20:02:51 localhost systemd[1]: Started irqbalance daemon.
Nov 23 20:02:51 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 23 20:02:51 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 23 20:02:51 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 23 20:02:51 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 23 20:02:51 localhost systemd[1]: Reached target sshd-keygen.target.
Nov 23 20:02:51 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 23 20:02:51 localhost systemd[1]: Reached target User and Group Name Lookups.
Nov 23 20:02:51 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 23 20:02:51 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 23 20:02:51 localhost kernel: Console: switching to colour dummy device 80x25
Nov 23 20:02:51 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 23 20:02:51 localhost kernel: [drm] features: -context_init
Nov 23 20:02:51 localhost kernel: [drm] number of scanouts: 1
Nov 23 20:02:51 localhost kernel: [drm] number of cap sets: 0
Nov 23 20:02:51 localhost systemd[1]: Starting User Login Management...
Nov 23 20:02:51 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 23 20:02:51 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 23 20:02:51 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 23 20:02:51 localhost kernel: Console: switching to colour frame buffer device 128x48
Nov 23 20:02:51 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 23 20:02:51 localhost chronyd[808]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 23 20:02:51 localhost chronyd[808]: Loaded 0 symmetric keys
Nov 23 20:02:51 localhost chronyd[808]: Using right/UTC timezone to obtain leap second data
Nov 23 20:02:51 localhost chronyd[808]: Loaded seccomp filter (level 2)
Nov 23 20:02:51 localhost systemd[1]: Started NTP client/server.
Nov 23 20:02:51 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 23 20:02:51 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 23 20:02:51 localhost systemd-logind[793]: New seat seat0.
Nov 23 20:02:51 localhost systemd-logind[793]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 23 20:02:51 localhost systemd-logind[793]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 23 20:02:51 localhost systemd[1]: Started User Login Management.
Nov 23 20:02:51 localhost kernel: kvm_amd: TSC scaling supported
Nov 23 20:02:51 localhost kernel: kvm_amd: Nested Virtualization enabled
Nov 23 20:02:51 localhost kernel: kvm_amd: Nested Paging enabled
Nov 23 20:02:51 localhost kernel: kvm_amd: LBR virtualization supported
Nov 23 20:02:51 localhost iptables.init[785]: iptables: Applying firewall rules: [  OK  ]
Nov 23 20:02:51 localhost systemd[1]: Finished IPv4 firewall with iptables.
Nov 23 20:02:52 localhost cloud-init[838]: Cloud-init v. 24.4-7.el9 running 'init-local' at Sun, 23 Nov 2025 20:02:52 +0000. Up 120.68 seconds.
Nov 23 20:02:52 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Nov 23 20:02:52 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Nov 23 20:02:52 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpkxj0f9qi.mount: Deactivated successfully.
Nov 23 20:02:52 localhost systemd[1]: Starting Hostname Service...
Nov 23 20:02:52 localhost systemd[1]: Started Hostname Service.
Nov 23 20:02:52 np0005532762.novalocal systemd-hostnamed[852]: Hostname set to <np0005532762.novalocal> (static)
Nov 23 20:02:52 np0005532762.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 23 20:02:52 np0005532762.novalocal systemd[1]: Reached target Preparation for Network.
Nov 23 20:02:52 np0005532762.novalocal systemd[1]: Starting Network Manager...
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.6954] NetworkManager (version 1.54.1-1.el9) is starting... (boot:6edcf464-8554-408a-ba56-0bae3cf8aec4)
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.6959] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7100] manager[0x55956a72c080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7154] hostname: hostname: using hostnamed
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7155] hostname: static hostname changed from (none) to "np0005532762.novalocal"
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7159] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7307] manager[0x55956a72c080]: rfkill: Wi-Fi hardware radio set enabled
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7310] manager[0x55956a72c080]: rfkill: WWAN hardware radio set enabled
Nov 23 20:02:52 np0005532762.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7412] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7413] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7413] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7414] manager: Networking is enabled by state file
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7416] settings: Loaded settings plugin: keyfile (internal)
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7468] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7497] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7525] dhcp: init: Using DHCP client 'internal'
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7529] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7544] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7559] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7568] device (lo): Activation: starting connection 'lo' (170402d3-84eb-4bc9-a75c-092c5ddf07e9)
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7578] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7581] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7614] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 23 20:02:52 np0005532762.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7619] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7621] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7622] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7624] device (eth0): carrier: link connected
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7625] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7630] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7639] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 23 20:02:52 np0005532762.novalocal systemd[1]: Started Network Manager.
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7645] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7646] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7649] manager: NetworkManager state is now CONNECTING
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7650] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7659] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7661] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 23 20:02:52 np0005532762.novalocal systemd[1]: Reached target Network.
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7697] dhcp4 (eth0): state changed new lease, address=38.102.83.106
Nov 23 20:02:52 np0005532762.novalocal systemd[1]: Starting Network Manager Wait Online...
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7705] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7727] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 20:02:52 np0005532762.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 23 20:02:52 np0005532762.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7924] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7926] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7927] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7933] device (lo): Activation: successful, device activated.
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7938] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7941] manager: NetworkManager state is now CONNECTED_SITE
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7945] device (eth0): Activation: successful, device activated.
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7950] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 23 20:02:52 np0005532762.novalocal NetworkManager[856]: <info>  [1763928172.7954] manager: startup complete
Nov 23 20:02:52 np0005532762.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Nov 23 20:02:52 np0005532762.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 23 20:02:52 np0005532762.novalocal systemd[1]: Reached target NFS client services.
Nov 23 20:02:52 np0005532762.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Nov 23 20:02:52 np0005532762.novalocal systemd[1]: Reached target Remote File Systems.
Nov 23 20:02:52 np0005532762.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 23 20:02:52 np0005532762.novalocal systemd[1]: Finished Network Manager Wait Online.
Nov 23 20:02:52 np0005532762.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Nov 23 20:02:53 np0005532762.novalocal cloud-init[920]: Cloud-init v. 24.4-7.el9 running 'init' at Sun, 23 Nov 2025 20:02:53 +0000. Up 121.73 seconds.
Nov 23 20:02:53 np0005532762.novalocal cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 23 20:02:53 np0005532762.novalocal cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 23 20:02:53 np0005532762.novalocal cloud-init[920]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 23 20:02:53 np0005532762.novalocal cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 23 20:02:53 np0005532762.novalocal cloud-init[920]: ci-info: |  eth0  | True |        38.102.83.106         | 255.255.255.0 | global | fa:16:3e:47:56:6b |
Nov 23 20:02:53 np0005532762.novalocal cloud-init[920]: ci-info: |  eth0  | True | fe80::f816:3eff:fe47:566b/64 |       .       |  link  | fa:16:3e:47:56:6b |
Nov 23 20:02:53 np0005532762.novalocal cloud-init[920]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 23 20:02:53 np0005532762.novalocal cloud-init[920]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 23 20:02:53 np0005532762.novalocal cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 23 20:02:53 np0005532762.novalocal cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 23 20:02:53 np0005532762.novalocal cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 23 20:02:53 np0005532762.novalocal cloud-init[920]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Nov 23 20:02:53 np0005532762.novalocal cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 23 20:02:53 np0005532762.novalocal cloud-init[920]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Nov 23 20:02:53 np0005532762.novalocal cloud-init[920]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 23 20:02:53 np0005532762.novalocal cloud-init[920]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Nov 23 20:02:53 np0005532762.novalocal cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 23 20:02:53 np0005532762.novalocal cloud-init[920]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 23 20:02:53 np0005532762.novalocal cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 23 20:02:53 np0005532762.novalocal cloud-init[920]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 23 20:02:53 np0005532762.novalocal cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 23 20:02:53 np0005532762.novalocal cloud-init[920]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 23 20:02:53 np0005532762.novalocal cloud-init[920]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Nov 23 20:02:53 np0005532762.novalocal cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 23 20:02:54 np0005532762.novalocal useradd[987]: new group: name=cloud-user, GID=1001
Nov 23 20:02:54 np0005532762.novalocal useradd[987]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Nov 23 20:02:54 np0005532762.novalocal useradd[987]: add 'cloud-user' to group 'adm'
Nov 23 20:02:54 np0005532762.novalocal useradd[987]: add 'cloud-user' to group 'systemd-journal'
Nov 23 20:02:54 np0005532762.novalocal useradd[987]: add 'cloud-user' to shadow group 'adm'
Nov 23 20:02:54 np0005532762.novalocal useradd[987]: add 'cloud-user' to shadow group 'systemd-journal'
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: Generating public/private rsa key pair.
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: The key fingerprint is:
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: SHA256:2X/a+x3yzoKiOQtmqaCDXQNokkjl6+60CwjQt540rQw root@np0005532762.novalocal
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: The key's randomart image is:
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: +---[RSA 3072]----+
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: |  ..             |
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: | o.              |
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: |+o...            |
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: |*....o   o       |
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: |+ Eo+ . S .      |
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: |o .=o+.    .     |
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: |o+ +=*      o... |
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: |+ * = .... . =+ o|
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: |...*.  ++ . . =*o|
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: +----[SHA256]-----+
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: Generating public/private ecdsa key pair.
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: The key fingerprint is:
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: SHA256:lOGSmVM7XlPQR0FLASu744xqdsqMd652eN3QM/lMVPg root@np0005532762.novalocal
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: The key's randomart image is:
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: +---[ECDSA 256]---+
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: |        o .oo+*+ |
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: |       * + ..oo..|
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: |      * * + ...o |
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: |       = o +  . E|
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: |        S .. o   |
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: |          ..= .  |
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: |       . .oo *   |
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: |     +* =+... o  |
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: |    .=*Xo o      |
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: +----[SHA256]-----+
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: Generating public/private ed25519 key pair.
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: The key fingerprint is:
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: SHA256:S7mzEKj5Gi6Efkt6Y9s35zwwLQUqFptapNpL6wpsgkg root@np0005532762.novalocal
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: The key's randomart image is:
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: +--[ED25519 256]--+
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: |                 |
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: |   o   .         |
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: |  o + . .        |
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: | . * o   o       |
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: |oE+ o . S        |
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: |Booo   * +       |
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: |B+++  . B        |
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: |=o=B.  +.=       |
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: |.=B+=.. =o.      |
Nov 23 20:02:55 np0005532762.novalocal cloud-init[920]: +----[SHA256]-----+
Nov 23 20:02:55 np0005532762.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Nov 23 20:02:55 np0005532762.novalocal systemd[1]: Reached target Cloud-config availability.
Nov 23 20:02:55 np0005532762.novalocal systemd[1]: Reached target Network is Online.
Nov 23 20:02:55 np0005532762.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Nov 23 20:02:55 np0005532762.novalocal systemd[1]: Starting Crash recovery kernel arming...
Nov 23 20:02:55 np0005532762.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Nov 23 20:02:55 np0005532762.novalocal systemd[1]: Starting System Logging Service...
Nov 23 20:02:55 np0005532762.novalocal sm-notify[1003]: Version 2.5.4 starting
Nov 23 20:02:55 np0005532762.novalocal systemd[1]: Starting OpenSSH server daemon...
Nov 23 20:02:55 np0005532762.novalocal systemd[1]: Starting Permit User Sessions...
Nov 23 20:02:55 np0005532762.novalocal systemd[1]: Started Notify NFS peers of a restart.
Nov 23 20:02:55 np0005532762.novalocal systemd[1]: Finished Permit User Sessions.
Nov 23 20:02:55 np0005532762.novalocal sshd[1005]: Server listening on 0.0.0.0 port 22.
Nov 23 20:02:55 np0005532762.novalocal sshd[1005]: Server listening on :: port 22.
Nov 23 20:02:55 np0005532762.novalocal systemd[1]: Started Command Scheduler.
Nov 23 20:02:55 np0005532762.novalocal rsyslogd[1004]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1004" x-info="https://www.rsyslog.com"] start
Nov 23 20:02:55 np0005532762.novalocal crond[1008]: (CRON) STARTUP (1.5.7)
Nov 23 20:02:55 np0005532762.novalocal rsyslogd[1004]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 23 20:02:55 np0005532762.novalocal crond[1008]: (CRON) INFO (Syslog will be used instead of sendmail.)
Nov 23 20:02:55 np0005532762.novalocal crond[1008]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 58% if used.)
Nov 23 20:02:55 np0005532762.novalocal crond[1008]: (CRON) INFO (running with inotify support)
Nov 23 20:02:55 np0005532762.novalocal systemd[1]: Started Getty on tty1.
Nov 23 20:02:55 np0005532762.novalocal systemd[1]: Started Serial Getty on ttyS0.
Nov 23 20:02:55 np0005532762.novalocal systemd[1]: Reached target Login Prompts.
Nov 23 20:02:55 np0005532762.novalocal systemd[1]: Started OpenSSH server daemon.
Nov 23 20:02:55 np0005532762.novalocal systemd[1]: Started System Logging Service.
Nov 23 20:02:55 np0005532762.novalocal systemd[1]: Reached target Multi-User System.
Nov 23 20:02:55 np0005532762.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 23 20:02:55 np0005532762.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 23 20:02:55 np0005532762.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 23 20:02:55 np0005532762.novalocal rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 20:02:55 np0005532762.novalocal kdumpctl[1012]: kdump: No kdump initial ramdisk found.
Nov 23 20:02:55 np0005532762.novalocal kdumpctl[1012]: kdump: Rebuilding /boot/initramfs-5.14.0-639.el9.x86_64kdump.img
Nov 23 20:02:55 np0005532762.novalocal cloud-init[1132]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Sun, 23 Nov 2025 20:02:55 +0000. Up 124.01 seconds.
Nov 23 20:02:55 np0005532762.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Nov 23 20:02:55 np0005532762.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Nov 23 20:02:55 np0005532762.novalocal dracut[1267]: dracut-057-102.git20250818.el9
Nov 23 20:02:55 np0005532762.novalocal sshd-session[1272]: Unable to negotiate with 38.102.83.114 port 56682: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Nov 23 20:02:55 np0005532762.novalocal sshd-session[1262]: Connection closed by 38.102.83.114 port 48100 [preauth]
Nov 23 20:02:55 np0005532762.novalocal sshd-session[1285]: Connection reset by 38.102.83.114 port 56688 [preauth]
Nov 23 20:02:55 np0005532762.novalocal sshd-session[1289]: Unable to negotiate with 38.102.83.114 port 56694: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Nov 23 20:02:55 np0005532762.novalocal cloud-init[1290]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Sun, 23 Nov 2025 20:02:55 +0000. Up 124.46 seconds.
Nov 23 20:02:55 np0005532762.novalocal sshd-session[1292]: Unable to negotiate with 38.102.83.114 port 56706: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Nov 23 20:02:55 np0005532762.novalocal dracut[1269]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-639.el9.x86_64kdump.img 5.14.0-639.el9.x86_64
Nov 23 20:02:55 np0005532762.novalocal cloud-init[1319]: #############################################################
Nov 23 20:02:55 np0005532762.novalocal cloud-init[1322]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 23 20:02:55 np0005532762.novalocal cloud-init[1329]: 256 SHA256:lOGSmVM7XlPQR0FLASu744xqdsqMd652eN3QM/lMVPg root@np0005532762.novalocal (ECDSA)
Nov 23 20:02:55 np0005532762.novalocal cloud-init[1336]: 256 SHA256:S7mzEKj5Gi6Efkt6Y9s35zwwLQUqFptapNpL6wpsgkg root@np0005532762.novalocal (ED25519)
Nov 23 20:02:55 np0005532762.novalocal cloud-init[1343]: 3072 SHA256:2X/a+x3yzoKiOQtmqaCDXQNokkjl6+60CwjQt540rQw root@np0005532762.novalocal (RSA)
Nov 23 20:02:55 np0005532762.novalocal cloud-init[1346]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 23 20:02:55 np0005532762.novalocal cloud-init[1351]: #############################################################
Nov 23 20:02:56 np0005532762.novalocal sshd-session[1359]: Unable to negotiate with 38.102.83.114 port 56724: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Nov 23 20:02:56 np0005532762.novalocal sshd-session[1296]: Connection closed by 38.102.83.114 port 56710 [preauth]
Nov 23 20:02:56 np0005532762.novalocal sshd-session[1362]: Unable to negotiate with 38.102.83.114 port 56728: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Nov 23 20:02:56 np0005532762.novalocal cloud-init[1290]: Cloud-init v. 24.4-7.el9 finished at Sun, 23 Nov 2025 20:02:56 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 124.66 seconds
Nov 23 20:02:56 np0005532762.novalocal sshd-session[1311]: Connection closed by 38.102.83.114 port 56714 [preauth]
Nov 23 20:02:56 np0005532762.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Nov 23 20:02:56 np0005532762.novalocal systemd[1]: Reached target Cloud-init target.
Nov 23 20:02:56 np0005532762.novalocal dracut[1269]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 23 20:02:56 np0005532762.novalocal dracut[1269]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 23 20:02:56 np0005532762.novalocal dracut[1269]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 23 20:02:56 np0005532762.novalocal dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 23 20:02:56 np0005532762.novalocal dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 23 20:02:56 np0005532762.novalocal dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 23 20:02:56 np0005532762.novalocal dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 23 20:02:56 np0005532762.novalocal dracut[1269]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 23 20:02:56 np0005532762.novalocal dracut[1269]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 23 20:02:56 np0005532762.novalocal dracut[1269]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 23 20:02:56 np0005532762.novalocal dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 23 20:02:56 np0005532762.novalocal dracut[1269]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 23 20:02:56 np0005532762.novalocal dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 23 20:02:56 np0005532762.novalocal dracut[1269]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 23 20:02:56 np0005532762.novalocal dracut[1269]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Nov 23 20:02:56 np0005532762.novalocal dracut[1269]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Nov 23 20:02:56 np0005532762.novalocal dracut[1269]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 23 20:02:56 np0005532762.novalocal dracut[1269]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 23 20:02:56 np0005532762.novalocal dracut[1269]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 23 20:02:56 np0005532762.novalocal dracut[1269]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 23 20:02:56 np0005532762.novalocal dracut[1269]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 23 20:02:56 np0005532762.novalocal dracut[1269]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 23 20:02:56 np0005532762.novalocal dracut[1269]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 23 20:02:56 np0005532762.novalocal dracut[1269]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 23 20:02:56 np0005532762.novalocal dracut[1269]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 23 20:02:56 np0005532762.novalocal dracut[1269]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 23 20:02:56 np0005532762.novalocal dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 23 20:02:56 np0005532762.novalocal dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 23 20:02:56 np0005532762.novalocal dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 23 20:02:56 np0005532762.novalocal dracut[1269]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 23 20:02:56 np0005532762.novalocal dracut[1269]: Module 'resume' will not be installed, because it's in the list to be omitted!
Nov 23 20:02:56 np0005532762.novalocal dracut[1269]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 23 20:02:56 np0005532762.novalocal dracut[1269]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Nov 23 20:02:57 np0005532762.novalocal dracut[1269]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 23 20:02:57 np0005532762.novalocal dracut[1269]: memstrack is not available
Nov 23 20:02:57 np0005532762.novalocal dracut[1269]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 23 20:02:57 np0005532762.novalocal dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 23 20:02:57 np0005532762.novalocal dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 23 20:02:57 np0005532762.novalocal dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 23 20:02:57 np0005532762.novalocal dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 23 20:02:57 np0005532762.novalocal dracut[1269]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 23 20:02:57 np0005532762.novalocal dracut[1269]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 23 20:02:57 np0005532762.novalocal dracut[1269]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 23 20:02:57 np0005532762.novalocal dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 23 20:02:57 np0005532762.novalocal dracut[1269]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 23 20:02:57 np0005532762.novalocal dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 23 20:02:57 np0005532762.novalocal dracut[1269]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 23 20:02:57 np0005532762.novalocal dracut[1269]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 23 20:02:57 np0005532762.novalocal dracut[1269]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 23 20:02:57 np0005532762.novalocal dracut[1269]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 23 20:02:57 np0005532762.novalocal dracut[1269]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 23 20:02:57 np0005532762.novalocal dracut[1269]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 23 20:02:57 np0005532762.novalocal dracut[1269]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 23 20:02:57 np0005532762.novalocal dracut[1269]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 23 20:02:57 np0005532762.novalocal dracut[1269]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 23 20:02:57 np0005532762.novalocal dracut[1269]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 23 20:02:57 np0005532762.novalocal dracut[1269]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 23 20:02:57 np0005532762.novalocal dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 23 20:02:57 np0005532762.novalocal dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 23 20:02:57 np0005532762.novalocal dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 23 20:02:57 np0005532762.novalocal dracut[1269]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 23 20:02:57 np0005532762.novalocal dracut[1269]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 23 20:02:57 np0005532762.novalocal dracut[1269]: memstrack is not available
Nov 23 20:02:57 np0005532762.novalocal dracut[1269]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 23 20:02:57 np0005532762.novalocal dracut[1269]: *** Including module: systemd ***
Nov 23 20:02:57 np0005532762.novalocal dracut[1269]: *** Including module: fips ***
Nov 23 20:02:58 np0005532762.novalocal dracut[1269]: *** Including module: systemd-initrd ***
Nov 23 20:02:58 np0005532762.novalocal dracut[1269]: *** Including module: i18n ***
Nov 23 20:02:58 np0005532762.novalocal dracut[1269]: *** Including module: drm ***
Nov 23 20:02:58 np0005532762.novalocal dracut[1269]: *** Including module: prefixdevname ***
Nov 23 20:02:58 np0005532762.novalocal dracut[1269]: *** Including module: kernel-modules ***
Nov 23 20:02:58 np0005532762.novalocal kernel: block vda: the capability attribute has been deprecated.
Nov 23 20:02:59 np0005532762.novalocal chronyd[808]: Selected source 149.56.19.163 (2.centos.pool.ntp.org)
Nov 23 20:02:59 np0005532762.novalocal chronyd[808]: System clock TAI offset set to 37 seconds
Nov 23 20:02:59 np0005532762.novalocal dracut[1269]: *** Including module: kernel-modules-extra ***
Nov 23 20:02:59 np0005532762.novalocal dracut[1269]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Nov 23 20:02:59 np0005532762.novalocal dracut[1269]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Nov 23 20:02:59 np0005532762.novalocal dracut[1269]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Nov 23 20:02:59 np0005532762.novalocal dracut[1269]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Nov 23 20:02:59 np0005532762.novalocal dracut[1269]: *** Including module: qemu ***
Nov 23 20:02:59 np0005532762.novalocal dracut[1269]: *** Including module: fstab-sys ***
Nov 23 20:02:59 np0005532762.novalocal dracut[1269]: *** Including module: rootfs-block ***
Nov 23 20:02:59 np0005532762.novalocal dracut[1269]: *** Including module: terminfo ***
Nov 23 20:02:59 np0005532762.novalocal dracut[1269]: *** Including module: udev-rules ***
Nov 23 20:03:00 np0005532762.novalocal dracut[1269]: Skipping udev rule: 91-permissions.rules
Nov 23 20:03:00 np0005532762.novalocal dracut[1269]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 23 20:03:00 np0005532762.novalocal dracut[1269]: *** Including module: virtiofs ***
Nov 23 20:03:00 np0005532762.novalocal dracut[1269]: *** Including module: dracut-systemd ***
Nov 23 20:03:00 np0005532762.novalocal dracut[1269]: *** Including module: usrmount ***
Nov 23 20:03:00 np0005532762.novalocal dracut[1269]: *** Including module: base ***
Nov 23 20:03:00 np0005532762.novalocal dracut[1269]: *** Including module: fs-lib ***
Nov 23 20:03:00 np0005532762.novalocal dracut[1269]: *** Including module: kdumpbase ***
Nov 23 20:03:00 np0005532762.novalocal dracut[1269]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 23 20:03:00 np0005532762.novalocal dracut[1269]:   microcode_ctl module: mangling fw_dir
Nov 23 20:03:00 np0005532762.novalocal dracut[1269]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 23 20:03:01 np0005532762.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 23 20:03:01 np0005532762.novalocal dracut[1269]:     microcode_ctl: configuration "intel" is ignored
Nov 23 20:03:01 np0005532762.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 23 20:03:01 np0005532762.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 23 20:03:01 np0005532762.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 23 20:03:01 np0005532762.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 23 20:03:01 np0005532762.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 23 20:03:01 np0005532762.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 23 20:03:01 np0005532762.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 23 20:03:01 np0005532762.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 23 20:03:01 np0005532762.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 23 20:03:01 np0005532762.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 23 20:03:01 np0005532762.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 23 20:03:01 np0005532762.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 23 20:03:01 np0005532762.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 23 20:03:01 np0005532762.novalocal irqbalance[786]: Cannot change IRQ 25 affinity: Operation not permitted
Nov 23 20:03:01 np0005532762.novalocal irqbalance[786]: IRQ 25 affinity is now unmanaged
Nov 23 20:03:01 np0005532762.novalocal irqbalance[786]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 23 20:03:01 np0005532762.novalocal irqbalance[786]: IRQ 31 affinity is now unmanaged
Nov 23 20:03:01 np0005532762.novalocal irqbalance[786]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 23 20:03:01 np0005532762.novalocal irqbalance[786]: IRQ 28 affinity is now unmanaged
Nov 23 20:03:01 np0005532762.novalocal irqbalance[786]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 23 20:03:01 np0005532762.novalocal irqbalance[786]: IRQ 32 affinity is now unmanaged
Nov 23 20:03:01 np0005532762.novalocal irqbalance[786]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 23 20:03:01 np0005532762.novalocal irqbalance[786]: IRQ 30 affinity is now unmanaged
Nov 23 20:03:01 np0005532762.novalocal irqbalance[786]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 23 20:03:01 np0005532762.novalocal irqbalance[786]: IRQ 29 affinity is now unmanaged
Nov 23 20:03:01 np0005532762.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 23 20:03:01 np0005532762.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 23 20:03:01 np0005532762.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 23 20:03:01 np0005532762.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 23 20:03:01 np0005532762.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 23 20:03:01 np0005532762.novalocal dracut[1269]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 23 20:03:01 np0005532762.novalocal dracut[1269]: *** Including module: openssl ***
Nov 23 20:03:01 np0005532762.novalocal dracut[1269]: *** Including module: shutdown ***
Nov 23 20:03:01 np0005532762.novalocal dracut[1269]: *** Including module: squash ***
Nov 23 20:03:01 np0005532762.novalocal dracut[1269]: *** Including modules done ***
Nov 23 20:03:01 np0005532762.novalocal dracut[1269]: *** Installing kernel module dependencies ***
Nov 23 20:03:02 np0005532762.novalocal dracut[1269]: *** Installing kernel module dependencies done ***
Nov 23 20:03:02 np0005532762.novalocal dracut[1269]: *** Resolving executable dependencies ***
Nov 23 20:03:02 np0005532762.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 23 20:03:05 np0005532762.novalocal dracut[1269]: *** Resolving executable dependencies done ***
Nov 23 20:03:05 np0005532762.novalocal dracut[1269]: *** Generating early-microcode cpio image ***
Nov 23 20:03:05 np0005532762.novalocal dracut[1269]: *** Store current command line parameters ***
Nov 23 20:03:05 np0005532762.novalocal dracut[1269]: Stored kernel commandline:
Nov 23 20:03:05 np0005532762.novalocal dracut[1269]: No dracut internal kernel commandline stored in the initramfs
Nov 23 20:03:06 np0005532762.novalocal dracut[1269]: *** Install squash loader ***
Nov 23 20:03:07 np0005532762.novalocal dracut[1269]: *** Squashing the files inside the initramfs ***
Nov 23 20:03:08 np0005532762.novalocal sshd-session[4142]: Accepted publickey for zuul from 38.102.83.114 port 56970 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Nov 23 20:03:08 np0005532762.novalocal systemd[1]: Created slice User Slice of UID 1000.
Nov 23 20:03:09 np0005532762.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 23 20:03:09 np0005532762.novalocal systemd-logind[793]: New session 1 of user zuul.
Nov 23 20:03:09 np0005532762.novalocal dracut[1269]: *** Squashing the files inside the initramfs done ***
Nov 23 20:03:09 np0005532762.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 23 20:03:09 np0005532762.novalocal dracut[1269]: *** Creating image file '/boot/initramfs-5.14.0-639.el9.x86_64kdump.img' ***
Nov 23 20:03:09 np0005532762.novalocal dracut[1269]: *** Hardlinking files ***
Nov 23 20:03:09 np0005532762.novalocal systemd[1]: Starting User Manager for UID 1000...
Nov 23 20:03:09 np0005532762.novalocal dracut[1269]: Mode:           real
Nov 23 20:03:09 np0005532762.novalocal dracut[1269]: Files:          50
Nov 23 20:03:09 np0005532762.novalocal systemd[4152]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 20:03:09 np0005532762.novalocal dracut[1269]: Linked:         0 files
Nov 23 20:03:09 np0005532762.novalocal dracut[1269]: Compared:       0 xattrs
Nov 23 20:03:09 np0005532762.novalocal dracut[1269]: Compared:       0 files
Nov 23 20:03:09 np0005532762.novalocal dracut[1269]: Saved:          0 B
Nov 23 20:03:09 np0005532762.novalocal dracut[1269]: Duration:       0.000584 seconds
Nov 23 20:03:09 np0005532762.novalocal dracut[1269]: *** Hardlinking files done ***
Nov 23 20:03:09 np0005532762.novalocal systemd[4152]: Queued start job for default target Main User Target.
Nov 23 20:03:09 np0005532762.novalocal systemd[4152]: Created slice User Application Slice.
Nov 23 20:03:09 np0005532762.novalocal systemd[4152]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 23 20:03:09 np0005532762.novalocal systemd[4152]: Started Daily Cleanup of User's Temporary Directories.
Nov 23 20:03:09 np0005532762.novalocal systemd[4152]: Reached target Paths.
Nov 23 20:03:09 np0005532762.novalocal systemd[4152]: Reached target Timers.
Nov 23 20:03:09 np0005532762.novalocal systemd[4152]: Starting D-Bus User Message Bus Socket...
Nov 23 20:03:09 np0005532762.novalocal systemd[4152]: Starting Create User's Volatile Files and Directories...
Nov 23 20:03:09 np0005532762.novalocal systemd[4152]: Finished Create User's Volatile Files and Directories.
Nov 23 20:03:09 np0005532762.novalocal systemd[4152]: Listening on D-Bus User Message Bus Socket.
Nov 23 20:03:09 np0005532762.novalocal systemd[4152]: Reached target Sockets.
Nov 23 20:03:09 np0005532762.novalocal systemd[4152]: Reached target Basic System.
Nov 23 20:03:09 np0005532762.novalocal systemd[4152]: Reached target Main User Target.
Nov 23 20:03:09 np0005532762.novalocal systemd[4152]: Startup finished in 133ms.
Nov 23 20:03:09 np0005532762.novalocal systemd[1]: Started User Manager for UID 1000.
Nov 23 20:03:09 np0005532762.novalocal systemd[1]: Started Session 1 of User zuul.
Nov 23 20:03:09 np0005532762.novalocal sshd-session[4142]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 20:03:10 np0005532762.novalocal dracut[1269]: *** Creating initramfs image file '/boot/initramfs-5.14.0-639.el9.x86_64kdump.img' done ***
Nov 23 20:03:10 np0005532762.novalocal python3[4266]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:03:10 np0005532762.novalocal kdumpctl[1012]: kdump: kexec: loaded kdump kernel
Nov 23 20:03:10 np0005532762.novalocal kdumpctl[1012]: kdump: Starting kdump: [OK]
Nov 23 20:03:10 np0005532762.novalocal systemd[1]: Finished Crash recovery kernel arming.
Nov 23 20:03:10 np0005532762.novalocal systemd[1]: Startup finished in 1.493s (kernel) + 1min 55.898s (initrd) + 22.192s (userspace) = 2min 19.584s.
Nov 23 20:03:14 np0005532762.novalocal python3[4408]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:03:21 np0005532762.novalocal python3[4466]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:03:22 np0005532762.novalocal python3[4506]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 23 20:03:22 np0005532762.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 23 20:03:24 np0005532762.novalocal python3[4534]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtaVH+Hfp24GC/nLOCl87TIJDf22iIpXaDmkip6hyFZ60lyVpfYxFl6Z4FqAbKci+Ock4NHD78xcKBN+nqpMJyIdLDl6IlqwxWyUc/lX5/TIm6PknK9ykLQzLzQZzRt1Mk1hK89Am3bbY9TVh2ZdujVyOmjWLVqA/0FhkvYKJWaid0pgs6EdTygKGzSfc7V7Zm4ijA+aHyny1AE6h4zzdGP/d6AL8fjaGD/LpcU6DnbbD9WHzrmCJXOyJa5/Ky5sttSY3WpH33eL7o554W1og4Dq5c+z/Pc0NlJT1DXPpxrtrLpJ57vb04Ae1Wg5PeG+MECxQWJRQBS51hNbLb4KTkDErpMaWbfcwdnzisQHazTgjNidmG34/j4ZvJ/NP2OkEBabHukyMvOCFw3Ew9lQ5eR2EiNjFtdvI12kRiXyyk9Ti3dsncy9kfInD5nPUeVGnxbIGdwP/T5Z2crXhgdrIWCRjRMvV/756tjKFXfzl/eIzO6UcLkU2I9qdqZpL0h8U= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 20:03:24 np0005532762.novalocal python3[4558]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:03:25 np0005532762.novalocal python3[4657]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 20:03:25 np0005532762.novalocal python3[4728]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763928205.0392907-252-198545758705626/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=b927b3f7e94443b59884cfdc0421ba80_id_rsa follow=False checksum=b8b11f458d3dcaed5d0ce620e052c77faf8a3312 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:03:26 np0005532762.novalocal python3[4851]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 20:03:26 np0005532762.novalocal python3[4922]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763928206.0701554-307-143178181218506/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=b927b3f7e94443b59884cfdc0421ba80_id_rsa.pub follow=False checksum=c143f6be1d4420dad576f5c3c6738e84bfb79a9b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:03:28 np0005532762.novalocal python3[4970]: ansible-ping Invoked with data=pong
Nov 23 20:03:29 np0005532762.novalocal python3[4994]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:03:31 np0005532762.novalocal python3[5052]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 23 20:03:32 np0005532762.novalocal python3[5084]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:03:32 np0005532762.novalocal python3[5108]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:03:33 np0005532762.novalocal python3[5132]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:03:34 np0005532762.novalocal python3[5156]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:03:34 np0005532762.novalocal python3[5180]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:03:34 np0005532762.novalocal python3[5204]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:03:36 np0005532762.novalocal sudo[5228]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rntrgqhxhyngyevmpykvhdlorfubepjj ; /usr/bin/python3'
Nov 23 20:03:36 np0005532762.novalocal sudo[5228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:03:37 np0005532762.novalocal python3[5230]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:03:37 np0005532762.novalocal sudo[5228]: pam_unix(sudo:session): session closed for user root
Nov 23 20:03:37 np0005532762.novalocal sudo[5306]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgmolfnteuanlfqdmkvyagpfzwyusoje ; /usr/bin/python3'
Nov 23 20:03:37 np0005532762.novalocal sudo[5306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:03:37 np0005532762.novalocal python3[5308]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 20:03:37 np0005532762.novalocal sudo[5306]: pam_unix(sudo:session): session closed for user root
Nov 23 20:03:38 np0005532762.novalocal sudo[5379]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmdsjrmrzhonsrecvhuhjqfurkjvfcgh ; /usr/bin/python3'
Nov 23 20:03:38 np0005532762.novalocal sudo[5379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:03:38 np0005532762.novalocal python3[5381]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763928217.2989328-32-185764935004977/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:03:38 np0005532762.novalocal sudo[5379]: pam_unix(sudo:session): session closed for user root
Nov 23 20:03:38 np0005532762.novalocal python3[5429]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 20:03:39 np0005532762.novalocal python3[5453]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 20:03:39 np0005532762.novalocal python3[5477]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 20:03:39 np0005532762.novalocal python3[5501]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 20:03:40 np0005532762.novalocal python3[5525]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 20:03:40 np0005532762.novalocal python3[5549]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 20:03:40 np0005532762.novalocal python3[5573]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 20:03:40 np0005532762.novalocal python3[5597]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 20:03:41 np0005532762.novalocal python3[5621]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 20:03:41 np0005532762.novalocal python3[5645]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 20:03:41 np0005532762.novalocal python3[5669]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 20:03:42 np0005532762.novalocal python3[5693]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 20:03:42 np0005532762.novalocal python3[5717]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 20:03:42 np0005532762.novalocal python3[5741]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 20:03:43 np0005532762.novalocal python3[5765]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 20:03:43 np0005532762.novalocal python3[5789]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 20:03:43 np0005532762.novalocal python3[5813]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 20:03:43 np0005532762.novalocal python3[5837]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 20:03:44 np0005532762.novalocal python3[5861]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 20:03:44 np0005532762.novalocal python3[5885]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 20:03:44 np0005532762.novalocal python3[5909]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 20:03:45 np0005532762.novalocal python3[5933]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 20:03:45 np0005532762.novalocal python3[5957]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 20:03:45 np0005532762.novalocal python3[5981]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 20:03:46 np0005532762.novalocal python3[6005]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 20:03:46 np0005532762.novalocal python3[6029]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 20:03:48 np0005532762.novalocal sudo[6053]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhikhrhwedafdgaghcwzvotcutlsatzj ; /usr/bin/python3'
Nov 23 20:03:48 np0005532762.novalocal sudo[6053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:03:48 np0005532762.novalocal python3[6055]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 23 20:03:48 np0005532762.novalocal systemd[1]: Starting Time & Date Service...
Nov 23 20:03:48 np0005532762.novalocal systemd[1]: Started Time & Date Service.
Nov 23 20:03:48 np0005532762.novalocal systemd-timedated[6057]: Changed time zone to 'UTC' (UTC).
Nov 23 20:03:48 np0005532762.novalocal sudo[6053]: pam_unix(sudo:session): session closed for user root
Nov 23 20:03:49 np0005532762.novalocal sudo[6084]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpvzggzocmxkfbfbspyiidopgexcjdoq ; /usr/bin/python3'
Nov 23 20:03:49 np0005532762.novalocal sudo[6084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:03:49 np0005532762.novalocal python3[6086]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:03:49 np0005532762.novalocal sudo[6084]: pam_unix(sudo:session): session closed for user root
Nov 23 20:03:49 np0005532762.novalocal python3[6162]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 20:03:49 np0005532762.novalocal python3[6233]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1763928229.4444351-253-21382755043799/source _original_basename=tmpd23_9fs0 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:03:50 np0005532762.novalocal python3[6333]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 20:03:51 np0005532762.novalocal python3[6404]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1763928230.4333925-303-278983814884466/source _original_basename=tmpqvy1r4lj follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:03:51 np0005532762.novalocal sudo[6504]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaeszshvzkcttzdjfewspshtfzmmkooa ; /usr/bin/python3'
Nov 23 20:03:51 np0005532762.novalocal sudo[6504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:03:52 np0005532762.novalocal python3[6506]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 20:03:52 np0005532762.novalocal sudo[6504]: pam_unix(sudo:session): session closed for user root
Nov 23 20:03:52 np0005532762.novalocal sudo[6577]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-altmnesogttvwjbdkjiqxcrdbdsaszio ; /usr/bin/python3'
Nov 23 20:03:52 np0005532762.novalocal sudo[6577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:03:52 np0005532762.novalocal python3[6579]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1763928231.69662-382-96088037328665/source _original_basename=tmp9nuo4d3z follow=False checksum=c8c0add412d571e63862b10c4bf0a26f0fcae547 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:03:52 np0005532762.novalocal sudo[6577]: pam_unix(sudo:session): session closed for user root
Nov 23 20:03:53 np0005532762.novalocal python3[6627]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:03:53 np0005532762.novalocal python3[6653]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:03:53 np0005532762.novalocal sudo[6731]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpiwkijmqixumcxmmjaruzscblaoqzdk ; /usr/bin/python3'
Nov 23 20:03:53 np0005532762.novalocal sudo[6731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:03:53 np0005532762.novalocal python3[6733]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 20:03:53 np0005532762.novalocal sudo[6731]: pam_unix(sudo:session): session closed for user root
Nov 23 20:03:53 np0005532762.novalocal sudo[6804]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okazdlaqybgbuymxmlarmvpspxfxxfsu ; /usr/bin/python3'
Nov 23 20:03:53 np0005532762.novalocal sudo[6804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:03:54 np0005532762.novalocal python3[6806]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1763928233.4755368-452-190537495677611/source _original_basename=tmpug09hiw6 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:03:54 np0005532762.novalocal sudo[6804]: pam_unix(sudo:session): session closed for user root
Nov 23 20:03:54 np0005532762.novalocal sudo[6855]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fplyivrodjptleaagmlbkjqtpujzqrmx ; /usr/bin/python3'
Nov 23 20:03:54 np0005532762.novalocal sudo[6855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:03:54 np0005532762.novalocal python3[6857]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-4746-eccf-00000000001f-1-compute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:03:54 np0005532762.novalocal sudo[6855]: pam_unix(sudo:session): session closed for user root
Nov 23 20:03:55 np0005532762.novalocal python3[6885]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163e3b-3c83-4746-eccf-000000000020-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 23 20:03:57 np0005532762.novalocal python3[6913]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:04:16 np0005532762.novalocal sudo[6937]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtebxfaclkekkxschutmenxssqhikwar ; /usr/bin/python3'
Nov 23 20:04:16 np0005532762.novalocal sudo[6937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:04:16 np0005532762.novalocal python3[6939]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:04:16 np0005532762.novalocal sudo[6937]: pam_unix(sudo:session): session closed for user root
Nov 23 20:04:18 np0005532762.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 23 20:05:16 np0005532762.novalocal sshd-session[4192]: Received disconnect from 38.102.83.114 port 56970:11: disconnected by user
Nov 23 20:05:16 np0005532762.novalocal sshd-session[4192]: Disconnected from user zuul 38.102.83.114 port 56970
Nov 23 20:05:16 np0005532762.novalocal sshd-session[4142]: pam_unix(sshd:session): session closed for user zuul
Nov 23 20:05:16 np0005532762.novalocal systemd-logind[793]: Session 1 logged out. Waiting for processes to exit.
Nov 23 20:05:23 np0005532762.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 23 20:05:23 np0005532762.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 23 20:05:23 np0005532762.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 23 20:05:23 np0005532762.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 23 20:05:23 np0005532762.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 23 20:05:23 np0005532762.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 23 20:05:23 np0005532762.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 23 20:05:23 np0005532762.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 23 20:05:23 np0005532762.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 23 20:05:23 np0005532762.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Nov 23 20:05:23 np0005532762.novalocal NetworkManager[856]: <info>  [1763928323.2552] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 23 20:05:23 np0005532762.novalocal systemd-udevd[6943]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 20:05:23 np0005532762.novalocal NetworkManager[856]: <info>  [1763928323.2802] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 20:05:23 np0005532762.novalocal NetworkManager[856]: <info>  [1763928323.2841] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 23 20:05:23 np0005532762.novalocal NetworkManager[856]: <info>  [1763928323.2848] device (eth1): carrier: link connected
Nov 23 20:05:23 np0005532762.novalocal NetworkManager[856]: <info>  [1763928323.2851] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 23 20:05:23 np0005532762.novalocal NetworkManager[856]: <info>  [1763928323.2862] policy: auto-activating connection 'Wired connection 1' (b8d72197-27ea-3e22-9d94-94c7806ccb0f)
Nov 23 20:05:23 np0005532762.novalocal NetworkManager[856]: <info>  [1763928323.2869] device (eth1): Activation: starting connection 'Wired connection 1' (b8d72197-27ea-3e22-9d94-94c7806ccb0f)
Nov 23 20:05:23 np0005532762.novalocal NetworkManager[856]: <info>  [1763928323.2870] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 20:05:23 np0005532762.novalocal NetworkManager[856]: <info>  [1763928323.2874] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 20:05:23 np0005532762.novalocal NetworkManager[856]: <info>  [1763928323.2881] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 20:05:23 np0005532762.novalocal NetworkManager[856]: <info>  [1763928323.2888] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 23 20:05:23 np0005532762.novalocal systemd[4152]: Starting Mark boot as successful...
Nov 23 20:05:23 np0005532762.novalocal systemd[4152]: Finished Mark boot as successful.
Nov 23 20:05:23 np0005532762.novalocal sshd-session[6947]: Accepted publickey for zuul from 38.102.83.114 port 38774 ssh2: RSA SHA256:vJMLYQFuuPNw0oBlCMsukcLw8e8jDo/ucmylbroLweU
Nov 23 20:05:23 np0005532762.novalocal systemd-logind[793]: New session 3 of user zuul.
Nov 23 20:05:23 np0005532762.novalocal systemd[1]: Started Session 3 of User zuul.
Nov 23 20:05:23 np0005532762.novalocal sshd-session[6947]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 20:05:24 np0005532762.novalocal python3[6974]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-f412-6632-00000000018f-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:05:34 np0005532762.novalocal sudo[7052]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tedhniqvpmwuovjkmdnkwvbolaumooct ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 23 20:05:34 np0005532762.novalocal sudo[7052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:05:34 np0005532762.novalocal python3[7054]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 20:05:34 np0005532762.novalocal sudo[7052]: pam_unix(sudo:session): session closed for user root
Nov 23 20:05:34 np0005532762.novalocal sudo[7125]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swadgvtsgisxvbuasdznqshrasifkchb ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 23 20:05:34 np0005532762.novalocal sudo[7125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:05:34 np0005532762.novalocal python3[7127]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763928333.9925566-155-212175523479482/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=1a02ea7f5b2269dc33b0d44c617e69d144a93207 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:05:34 np0005532762.novalocal sudo[7125]: pam_unix(sudo:session): session closed for user root
Nov 23 20:05:34 np0005532762.novalocal sudo[7175]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnqfxmlstnbwovqrylfdhyjfsenmpclb ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 23 20:05:34 np0005532762.novalocal sudo[7175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:05:35 np0005532762.novalocal python3[7177]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 20:05:35 np0005532762.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 23 20:05:35 np0005532762.novalocal systemd[1]: Stopped Network Manager Wait Online.
Nov 23 20:05:35 np0005532762.novalocal systemd[1]: Stopping Network Manager Wait Online...
Nov 23 20:05:35 np0005532762.novalocal systemd[1]: Stopping Network Manager...
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[856]: <info>  [1763928335.2308] caught SIGTERM, shutting down normally.
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[856]: <info>  [1763928335.2323] dhcp4 (eth0): canceled DHCP transaction
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[856]: <info>  [1763928335.2323] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[856]: <info>  [1763928335.2323] dhcp4 (eth0): state changed no lease
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[856]: <info>  [1763928335.2326] manager: NetworkManager state is now CONNECTING
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[856]: <info>  [1763928335.2469] dhcp4 (eth1): canceled DHCP transaction
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[856]: <info>  [1763928335.2469] dhcp4 (eth1): state changed no lease
Nov 23 20:05:35 np0005532762.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[856]: <info>  [1763928335.2531] exiting (success)
Nov 23 20:05:35 np0005532762.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 23 20:05:35 np0005532762.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 23 20:05:35 np0005532762.novalocal systemd[1]: Stopped Network Manager.
Nov 23 20:05:35 np0005532762.novalocal systemd[1]: NetworkManager.service: Consumed 1.208s CPU time, 10.0M memory peak.
Nov 23 20:05:35 np0005532762.novalocal systemd[1]: Starting Network Manager...
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.3317] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:6edcf464-8554-408a-ba56-0bae3cf8aec4)
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.3318] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.3393] manager[0x560c24eeb070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 23 20:05:35 np0005532762.novalocal systemd[1]: Starting Hostname Service...
Nov 23 20:05:35 np0005532762.novalocal systemd[1]: Started Hostname Service.
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4520] hostname: hostname: using hostnamed
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4521] hostname: static hostname changed from (none) to "np0005532762.novalocal"
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4526] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4531] manager[0x560c24eeb070]: rfkill: Wi-Fi hardware radio set enabled
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4532] manager[0x560c24eeb070]: rfkill: WWAN hardware radio set enabled
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4558] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4559] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4560] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4560] manager: Networking is enabled by state file
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4563] settings: Loaded settings plugin: keyfile (internal)
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4566] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4594] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4604] dhcp: init: Using DHCP client 'internal'
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4606] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4613] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4620] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4628] device (lo): Activation: starting connection 'lo' (170402d3-84eb-4bc9-a75c-092c5ddf07e9)
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4635] device (eth0): carrier: link connected
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4639] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4646] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4647] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4656] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4663] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4669] device (eth1): carrier: link connected
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4673] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4679] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (b8d72197-27ea-3e22-9d94-94c7806ccb0f) (indicated)
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4680] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4686] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4693] device (eth1): Activation: starting connection 'Wired connection 1' (b8d72197-27ea-3e22-9d94-94c7806ccb0f)
Nov 23 20:05:35 np0005532762.novalocal systemd[1]: Started Network Manager.
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4699] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4706] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4712] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4715] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4719] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4726] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4730] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4735] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4740] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4748] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4752] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4761] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4764] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4778] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4783] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.4791] device (lo): Activation: successful, device activated.
Nov 23 20:05:35 np0005532762.novalocal systemd[1]: Starting Network Manager Wait Online...
Nov 23 20:05:35 np0005532762.novalocal sudo[7175]: pam_unix(sudo:session): session closed for user root
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.6572] dhcp4 (eth0): state changed new lease, address=38.102.83.106
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.6585] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.6658] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.6694] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.6696] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.6700] manager: NetworkManager state is now CONNECTED_SITE
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.6704] device (eth0): Activation: successful, device activated.
Nov 23 20:05:35 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928335.6711] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 23 20:05:35 np0005532762.novalocal python3[7244]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-f412-6632-0000000000c8-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:05:45 np0005532762.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 23 20:06:05 np0005532762.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 23 20:06:20 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928380.3723] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 23 20:06:20 np0005532762.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 23 20:06:20 np0005532762.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 23 20:06:20 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928380.3949] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 23 20:06:20 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928380.3952] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 23 20:06:20 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928380.3961] device (eth1): Activation: successful, device activated.
Nov 23 20:06:20 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928380.3968] manager: startup complete
Nov 23 20:06:20 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928380.3970] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 23 20:06:20 np0005532762.novalocal NetworkManager[7191]: <warn>  [1763928380.3975] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 23 20:06:20 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928380.3982] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 23 20:06:20 np0005532762.novalocal systemd[1]: Finished Network Manager Wait Online.
Nov 23 20:06:20 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928380.4099] dhcp4 (eth1): canceled DHCP transaction
Nov 23 20:06:20 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928380.4102] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 23 20:06:20 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928380.4103] dhcp4 (eth1): state changed no lease
Nov 23 20:06:20 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928380.4120] policy: auto-activating connection 'ci-private-network' (c8f28de1-00ce-5ad5-b1e7-36e35b879f57)
Nov 23 20:06:20 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928380.4125] device (eth1): Activation: starting connection 'ci-private-network' (c8f28de1-00ce-5ad5-b1e7-36e35b879f57)
Nov 23 20:06:20 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928380.4127] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 20:06:20 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928380.4130] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 20:06:20 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928380.4137] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 20:06:20 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928380.4146] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 20:06:20 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928380.4201] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 20:06:20 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928380.4205] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 20:06:20 np0005532762.novalocal NetworkManager[7191]: <info>  [1763928380.4212] device (eth1): Activation: successful, device activated.
Nov 23 20:06:30 np0005532762.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 23 20:06:35 np0005532762.novalocal sshd-session[6950]: Received disconnect from 38.102.83.114 port 38774:11: disconnected by user
Nov 23 20:06:35 np0005532762.novalocal sshd-session[6950]: Disconnected from user zuul 38.102.83.114 port 38774
Nov 23 20:06:35 np0005532762.novalocal sshd-session[6947]: pam_unix(sshd:session): session closed for user zuul
Nov 23 20:06:35 np0005532762.novalocal systemd-logind[793]: Session 3 logged out. Waiting for processes to exit.
Nov 23 20:06:35 np0005532762.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Nov 23 20:06:35 np0005532762.novalocal systemd[1]: session-3.scope: Consumed 1.650s CPU time.
Nov 23 20:06:35 np0005532762.novalocal systemd-logind[793]: Removed session 3.
Nov 23 20:07:10 np0005532762.novalocal sshd-session[7290]: Accepted publickey for zuul from 38.102.83.114 port 46236 ssh2: RSA SHA256:vJMLYQFuuPNw0oBlCMsukcLw8e8jDo/ucmylbroLweU
Nov 23 20:07:10 np0005532762.novalocal systemd-logind[793]: New session 4 of user zuul.
Nov 23 20:07:10 np0005532762.novalocal systemd[1]: Started Session 4 of User zuul.
Nov 23 20:07:10 np0005532762.novalocal sshd-session[7290]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 20:07:11 np0005532762.novalocal sudo[7369]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csvamzxrbyuaxzpnagsipiafpedxhqqg ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 23 20:07:11 np0005532762.novalocal sudo[7369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:07:11 np0005532762.novalocal python3[7371]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 20:07:11 np0005532762.novalocal sudo[7369]: pam_unix(sudo:session): session closed for user root
Nov 23 20:07:11 np0005532762.novalocal sudo[7442]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbbxnedsuduuzolofoinxcxxqhnjvubu ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 23 20:07:11 np0005532762.novalocal sudo[7442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:07:11 np0005532762.novalocal python3[7444]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763928431.0703604-373-246174899525340/source _original_basename=tmprnfgy_rq follow=False checksum=3134bd1d03fba929119b03a893a690ab48d9a2ea backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:07:11 np0005532762.novalocal sudo[7442]: pam_unix(sudo:session): session closed for user root
Nov 23 20:07:14 np0005532762.novalocal sshd-session[7293]: Connection closed by 38.102.83.114 port 46236
Nov 23 20:07:14 np0005532762.novalocal sshd-session[7290]: pam_unix(sshd:session): session closed for user zuul
Nov 23 20:07:14 np0005532762.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Nov 23 20:07:14 np0005532762.novalocal systemd-logind[793]: Session 4 logged out. Waiting for processes to exit.
Nov 23 20:07:14 np0005532762.novalocal systemd-logind[793]: Removed session 4.
Nov 23 20:08:41 np0005532762.novalocal systemd[4152]: Created slice User Background Tasks Slice.
Nov 23 20:08:41 np0005532762.novalocal systemd[4152]: Starting Cleanup of User's Temporary Files and Directories...
Nov 23 20:08:41 np0005532762.novalocal systemd[4152]: Finished Cleanup of User's Temporary Files and Directories.
Nov 23 20:10:57 np0005532762.novalocal sshd-session[7474]: Invalid user Administrator from 185.156.73.233 port 34924
Nov 23 20:10:57 np0005532762.novalocal sshd-session[7474]: Connection closed by invalid user Administrator 185.156.73.233 port 34924 [preauth]
Nov 23 20:11:12 np0005532762.novalocal sshd-session[7477]: error: kex_exchange_identification: read: Connection reset by peer
Nov 23 20:11:12 np0005532762.novalocal sshd-session[7477]: Connection reset by 45.140.17.97 port 4949
Nov 23 20:11:58 np0005532762.novalocal sshd-session[7478]: Connection closed by 92.118.39.92 port 54642
Nov 23 20:12:18 np0005532762.novalocal sshd-session[7480]: Accepted publickey for zuul from 38.102.83.114 port 46222 ssh2: RSA SHA256:vJMLYQFuuPNw0oBlCMsukcLw8e8jDo/ucmylbroLweU
Nov 23 20:12:18 np0005532762.novalocal systemd-logind[793]: New session 5 of user zuul.
Nov 23 20:12:18 np0005532762.novalocal systemd[1]: Started Session 5 of User zuul.
Nov 23 20:12:18 np0005532762.novalocal sshd-session[7480]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 20:12:18 np0005532762.novalocal sudo[7507]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afpgjfdquuhkyujofvimatehecwtdhgu ; /usr/bin/python3'
Nov 23 20:12:18 np0005532762.novalocal sudo[7507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:12:18 np0005532762.novalocal python3[7509]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163e3b-3c83-bee1-1da1-000000001cd8-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:12:18 np0005532762.novalocal sudo[7507]: pam_unix(sudo:session): session closed for user root
Nov 23 20:12:19 np0005532762.novalocal sudo[7536]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixmyofwenrmjneqjotzjccperepxqmxk ; /usr/bin/python3'
Nov 23 20:12:19 np0005532762.novalocal sudo[7536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:12:19 np0005532762.novalocal python3[7538]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:12:19 np0005532762.novalocal sudo[7536]: pam_unix(sudo:session): session closed for user root
Nov 23 20:12:19 np0005532762.novalocal sudo[7562]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmlgoloiaqflgyxcmsmrpnmxvqpsynbk ; /usr/bin/python3'
Nov 23 20:12:19 np0005532762.novalocal sudo[7562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:12:19 np0005532762.novalocal python3[7564]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:12:19 np0005532762.novalocal sudo[7562]: pam_unix(sudo:session): session closed for user root
Nov 23 20:12:19 np0005532762.novalocal sudo[7588]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqkcsttqcznpuhobhpifzmvwrmpawxwn ; /usr/bin/python3'
Nov 23 20:12:19 np0005532762.novalocal sudo[7588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:12:19 np0005532762.novalocal python3[7590]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:12:19 np0005532762.novalocal sudo[7588]: pam_unix(sudo:session): session closed for user root
Nov 23 20:12:19 np0005532762.novalocal sudo[7614]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmdfjgfynvrparqrgvlydwruprkeqjkt ; /usr/bin/python3'
Nov 23 20:12:19 np0005532762.novalocal sudo[7614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:12:20 np0005532762.novalocal python3[7616]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:12:20 np0005532762.novalocal sudo[7614]: pam_unix(sudo:session): session closed for user root
Nov 23 20:12:20 np0005532762.novalocal sudo[7640]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkdnkvmmzskfqpiztakmmlvfasobnoau ; /usr/bin/python3'
Nov 23 20:12:20 np0005532762.novalocal sudo[7640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:12:20 np0005532762.novalocal python3[7642]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:12:20 np0005532762.novalocal sudo[7640]: pam_unix(sudo:session): session closed for user root
Nov 23 20:12:21 np0005532762.novalocal sudo[7718]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpzjygafdpjdgomeuslzbrufztamrasx ; /usr/bin/python3'
Nov 23 20:12:21 np0005532762.novalocal sudo[7718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:12:21 np0005532762.novalocal python3[7720]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 20:12:21 np0005532762.novalocal sudo[7718]: pam_unix(sudo:session): session closed for user root
Nov 23 20:12:21 np0005532762.novalocal sudo[7791]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmfolnlwarsvjuoofggdchzmncewccaa ; /usr/bin/python3'
Nov 23 20:12:21 np0005532762.novalocal sudo[7791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:12:21 np0005532762.novalocal python3[7793]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763928741.0655618-511-277801935526292/source _original_basename=tmpnxpk74nv follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:12:21 np0005532762.novalocal sudo[7791]: pam_unix(sudo:session): session closed for user root
Nov 23 20:12:22 np0005532762.novalocal sudo[7841]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljbutnusswugdhxwomfmniyhwaupcetr ; /usr/bin/python3'
Nov 23 20:12:22 np0005532762.novalocal sudo[7841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:12:22 np0005532762.novalocal python3[7843]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 20:12:22 np0005532762.novalocal systemd[1]: Reloading.
Nov 23 20:12:22 np0005532762.novalocal systemd-rc-local-generator[7862]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:12:22 np0005532762.novalocal sudo[7841]: pam_unix(sudo:session): session closed for user root
Nov 23 20:12:24 np0005532762.novalocal sudo[7897]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxoheffdnzlciidoidmcmohbawifdnzq ; /usr/bin/python3'
Nov 23 20:12:24 np0005532762.novalocal sudo[7897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:12:24 np0005532762.novalocal python3[7899]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 23 20:12:24 np0005532762.novalocal sudo[7897]: pam_unix(sudo:session): session closed for user root
Nov 23 20:12:24 np0005532762.novalocal sudo[7923]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljjkzhejyhqomcpqwmvijjbqbeykcyox ; /usr/bin/python3'
Nov 23 20:12:24 np0005532762.novalocal sudo[7923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:12:24 np0005532762.novalocal python3[7925]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:12:24 np0005532762.novalocal sudo[7923]: pam_unix(sudo:session): session closed for user root
Nov 23 20:12:25 np0005532762.novalocal sudo[7951]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upbefteqpxhaegocilsxbhpnejgnfpyn ; /usr/bin/python3'
Nov 23 20:12:25 np0005532762.novalocal sudo[7951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:12:25 np0005532762.novalocal python3[7953]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:12:25 np0005532762.novalocal sudo[7951]: pam_unix(sudo:session): session closed for user root
Nov 23 20:12:25 np0005532762.novalocal sudo[7979]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gulygwozprngtlohhzxpcyyixasuckpy ; /usr/bin/python3'
Nov 23 20:12:25 np0005532762.novalocal sudo[7979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:12:25 np0005532762.novalocal python3[7981]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:12:25 np0005532762.novalocal sudo[7979]: pam_unix(sudo:session): session closed for user root
Nov 23 20:12:25 np0005532762.novalocal sudo[8007]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geqleggzmfsrzungayjmruahrsejmmwc ; /usr/bin/python3'
Nov 23 20:12:25 np0005532762.novalocal sudo[8007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:12:25 np0005532762.novalocal python3[8009]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:12:25 np0005532762.novalocal sudo[8007]: pam_unix(sudo:session): session closed for user root
Nov 23 20:12:26 np0005532762.novalocal python3[8036]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max; _uses_shell=True zuul_log_id=fa163e3b-3c83-bee1-1da1-000000001cdf-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:12:26 np0005532762.novalocal python3[8066]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 20:12:29 np0005532762.novalocal sshd-session[7483]: Connection closed by 38.102.83.114 port 46222
Nov 23 20:12:29 np0005532762.novalocal sshd-session[7480]: pam_unix(sshd:session): session closed for user zuul
Nov 23 20:12:29 np0005532762.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Nov 23 20:12:29 np0005532762.novalocal systemd[1]: session-5.scope: Consumed 4.110s CPU time.
Nov 23 20:12:29 np0005532762.novalocal systemd-logind[793]: Session 5 logged out. Waiting for processes to exit.
Nov 23 20:12:29 np0005532762.novalocal systemd-logind[793]: Removed session 5.
Nov 23 20:12:31 np0005532762.novalocal sshd-session[8071]: Accepted publickey for zuul from 38.102.83.114 port 35476 ssh2: RSA SHA256:vJMLYQFuuPNw0oBlCMsukcLw8e8jDo/ucmylbroLweU
Nov 23 20:12:31 np0005532762.novalocal systemd-logind[793]: New session 6 of user zuul.
Nov 23 20:12:31 np0005532762.novalocal irqbalance[786]: Cannot change IRQ 27 affinity: Operation not permitted
Nov 23 20:12:31 np0005532762.novalocal irqbalance[786]: IRQ 27 affinity is now unmanaged
Nov 23 20:12:31 np0005532762.novalocal systemd[1]: Started Session 6 of User zuul.
Nov 23 20:12:31 np0005532762.novalocal sshd-session[8071]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 20:12:31 np0005532762.novalocal sudo[8098]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-janqkspqzvnhcxahlctdplkvlhcrbdsr ; /usr/bin/python3'
Nov 23 20:12:31 np0005532762.novalocal sudo[8098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:12:31 np0005532762.novalocal python3[8100]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 23 20:12:46 np0005532762.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 23 20:12:46 np0005532762.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 20:12:46 np0005532762.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 23 20:12:46 np0005532762.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 20:12:46 np0005532762.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 23 20:12:46 np0005532762.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 20:12:46 np0005532762.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 20:12:46 np0005532762.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 20:12:55 np0005532762.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 23 20:12:55 np0005532762.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 20:12:55 np0005532762.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 23 20:12:55 np0005532762.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 20:12:55 np0005532762.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 23 20:12:55 np0005532762.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 20:12:55 np0005532762.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 20:12:55 np0005532762.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 20:13:04 np0005532762.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 23 20:13:04 np0005532762.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 20:13:04 np0005532762.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 23 20:13:04 np0005532762.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 20:13:04 np0005532762.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 23 20:13:04 np0005532762.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 20:13:04 np0005532762.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 20:13:04 np0005532762.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 20:13:06 np0005532762.novalocal setsebool[8165]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 23 20:13:06 np0005532762.novalocal setsebool[8165]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Nov 23 20:13:17 np0005532762.novalocal kernel: SELinux:  Converting 388 SID table entries...
Nov 23 20:13:17 np0005532762.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 20:13:17 np0005532762.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 23 20:13:17 np0005532762.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 20:13:17 np0005532762.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 23 20:13:17 np0005532762.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 20:13:17 np0005532762.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 20:13:17 np0005532762.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 20:13:40 np0005532762.novalocal dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 23 20:13:40 np0005532762.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 20:13:40 np0005532762.novalocal systemd[1]: Starting man-db-cache-update.service...
Nov 23 20:13:40 np0005532762.novalocal systemd[1]: Reloading.
Nov 23 20:13:40 np0005532762.novalocal systemd-rc-local-generator[8921]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:13:40 np0005532762.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Nov 23 20:13:41 np0005532762.novalocal sudo[8098]: pam_unix(sudo:session): session closed for user root
Nov 23 20:13:46 np0005532762.novalocal python3[12718]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot" _uses_shell=True zuul_log_id=fa163e3b-3c83-ba7b-575b-00000000000c-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:13:47 np0005532762.novalocal kernel: evm: overlay not supported
Nov 23 20:13:47 np0005532762.novalocal systemd[4152]: Starting D-Bus User Message Bus...
Nov 23 20:13:47 np0005532762.novalocal dbus-broker-launch[13572]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 23 20:13:47 np0005532762.novalocal dbus-broker-launch[13572]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 23 20:13:47 np0005532762.novalocal systemd[4152]: Started D-Bus User Message Bus.
Nov 23 20:13:47 np0005532762.novalocal dbus-broker-lau[13572]: Ready
Nov 23 20:13:47 np0005532762.novalocal systemd[4152]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 23 20:13:47 np0005532762.novalocal systemd[4152]: Created slice Slice /user.
Nov 23 20:13:47 np0005532762.novalocal systemd[4152]: podman-13446.scope: unit configures an IP firewall, but not running as root.
Nov 23 20:13:47 np0005532762.novalocal systemd[4152]: (This warning is only shown for the first unit using IP firewalling.)
Nov 23 20:13:47 np0005532762.novalocal systemd[4152]: Started podman-13446.scope.
Nov 23 20:13:47 np0005532762.novalocal systemd[4152]: Started podman-pause-fb01b08e.scope.
Nov 23 20:13:47 np0005532762.novalocal sshd-session[8074]: Connection closed by 38.102.83.114 port 35476
Nov 23 20:13:47 np0005532762.novalocal sshd-session[8071]: pam_unix(sshd:session): session closed for user zuul
Nov 23 20:13:47 np0005532762.novalocal systemd[1]: session-6.scope: Deactivated successfully.
Nov 23 20:13:47 np0005532762.novalocal systemd[1]: session-6.scope: Consumed 59.860s CPU time.
Nov 23 20:13:47 np0005532762.novalocal systemd-logind[793]: Session 6 logged out. Waiting for processes to exit.
Nov 23 20:13:47 np0005532762.novalocal systemd-logind[793]: Removed session 6.
Nov 23 20:14:02 np0005532762.novalocal sshd-session[18937]: Connection closed by 38.102.83.13 port 50410 [preauth]
Nov 23 20:14:02 np0005532762.novalocal sshd-session[18943]: Connection closed by 38.102.83.13 port 50424 [preauth]
Nov 23 20:14:02 np0005532762.novalocal sshd-session[18945]: Unable to negotiate with 38.102.83.13 port 50428: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 23 20:14:02 np0005532762.novalocal sshd-session[18939]: Unable to negotiate with 38.102.83.13 port 50444: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Nov 23 20:14:02 np0005532762.novalocal sshd-session[18940]: Unable to negotiate with 38.102.83.13 port 50446: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Nov 23 20:14:07 np0005532762.novalocal sshd-session[20287]: Accepted publickey for zuul from 38.102.83.114 port 43934 ssh2: RSA SHA256:vJMLYQFuuPNw0oBlCMsukcLw8e8jDo/ucmylbroLweU
Nov 23 20:14:07 np0005532762.novalocal systemd-logind[793]: New session 7 of user zuul.
Nov 23 20:14:07 np0005532762.novalocal systemd[1]: Started Session 7 of User zuul.
Nov 23 20:14:07 np0005532762.novalocal sshd-session[20287]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 20:14:08 np0005532762.novalocal python3[20389]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA87KGYjjoyogEDAuKEHrB6Oxv3mIvu13bhzDbjQjrNyl3D2q3szz508Yk2UHZaBKDHJbLxThWYWGwZpHtr+UTo= zuul@np0005532760.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 20:14:08 np0005532762.novalocal sudo[20563]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uggwxxuedgpmutvgoyxjswlpkraggktn ; /usr/bin/python3'
Nov 23 20:14:08 np0005532762.novalocal sudo[20563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:14:08 np0005532762.novalocal python3[20576]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA87KGYjjoyogEDAuKEHrB6Oxv3mIvu13bhzDbjQjrNyl3D2q3szz508Yk2UHZaBKDHJbLxThWYWGwZpHtr+UTo= zuul@np0005532760.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 20:14:08 np0005532762.novalocal sudo[20563]: pam_unix(sudo:session): session closed for user root
Nov 23 20:14:09 np0005532762.novalocal sudo[20963]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wosydmdfehelohjnaudvplccdhvxjcfi ; /usr/bin/python3'
Nov 23 20:14:09 np0005532762.novalocal sudo[20963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:14:09 np0005532762.novalocal python3[20969]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005532762.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 23 20:14:09 np0005532762.novalocal useradd[21039]: new group: name=cloud-admin, GID=1002
Nov 23 20:14:09 np0005532762.novalocal useradd[21039]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Nov 23 20:14:09 np0005532762.novalocal sudo[20963]: pam_unix(sudo:session): session closed for user root
Nov 23 20:14:09 np0005532762.novalocal sudo[21182]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfoeoptagcpdkbccuiojjaystlvlupkn ; /usr/bin/python3'
Nov 23 20:14:09 np0005532762.novalocal sudo[21182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:14:10 np0005532762.novalocal python3[21193]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA87KGYjjoyogEDAuKEHrB6Oxv3mIvu13bhzDbjQjrNyl3D2q3szz508Yk2UHZaBKDHJbLxThWYWGwZpHtr+UTo= zuul@np0005532760.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 20:14:10 np0005532762.novalocal sudo[21182]: pam_unix(sudo:session): session closed for user root
Nov 23 20:14:10 np0005532762.novalocal sudo[21432]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmxbavkqgftlugtyfxrwrlugrfkndics ; /usr/bin/python3'
Nov 23 20:14:10 np0005532762.novalocal sudo[21432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:14:10 np0005532762.novalocal python3[21439]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 20:14:10 np0005532762.novalocal sudo[21432]: pam_unix(sudo:session): session closed for user root
Nov 23 20:14:10 np0005532762.novalocal sudo[21672]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igqpmxwmbiimlameoslnherujnpypkpe ; /usr/bin/python3'
Nov 23 20:14:10 np0005532762.novalocal sudo[21672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:14:10 np0005532762.novalocal python3[21679]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1763928850.225103-151-2060321674992/source _original_basename=tmpmdj0othu follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:14:10 np0005532762.novalocal sudo[21672]: pam_unix(sudo:session): session closed for user root
Nov 23 20:14:11 np0005532762.novalocal sudo[22006]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqopkunsfzmdgtqggckvvawwthtbgnbm ; /usr/bin/python3'
Nov 23 20:14:11 np0005532762.novalocal sudo[22006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:14:11 np0005532762.novalocal python3[22012]: ansible-ansible.builtin.hostname Invoked with name=compute-1 use=systemd
Nov 23 20:14:11 np0005532762.novalocal systemd[1]: Starting Hostname Service...
Nov 23 20:14:11 np0005532762.novalocal systemd[1]: Started Hostname Service.
Nov 23 20:14:11 np0005532762.novalocal systemd-hostnamed[22100]: Changed pretty hostname to 'compute-1'
Nov 23 20:14:11 compute-1 systemd-hostnamed[22100]: Hostname set to <compute-1> (static)
Nov 23 20:14:11 compute-1 NetworkManager[7191]: <info>  [1763928851.9481] hostname: static hostname changed from "np0005532762.novalocal" to "compute-1"
Nov 23 20:14:11 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 23 20:14:11 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 23 20:14:12 compute-1 sudo[22006]: pam_unix(sudo:session): session closed for user root
Nov 23 20:14:12 compute-1 sshd-session[20329]: Connection closed by 38.102.83.114 port 43934
Nov 23 20:14:12 compute-1 sshd-session[20287]: pam_unix(sshd:session): session closed for user zuul
Nov 23 20:14:12 compute-1 systemd[1]: session-7.scope: Deactivated successfully.
Nov 23 20:14:12 compute-1 systemd[1]: session-7.scope: Consumed 2.273s CPU time.
Nov 23 20:14:12 compute-1 systemd-logind[793]: Session 7 logged out. Waiting for processes to exit.
Nov 23 20:14:12 compute-1 systemd-logind[793]: Removed session 7.
Nov 23 20:14:21 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 23 20:14:39 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 23 20:14:39 compute-1 systemd[1]: Finished man-db-cache-update.service.
Nov 23 20:14:39 compute-1 systemd[1]: man-db-cache-update.service: Consumed 1min 3.859s CPU time.
Nov 23 20:14:39 compute-1 systemd[1]: run-r98f26c7c34084d02b10991c9a52bb160.service: Deactivated successfully.
Nov 23 20:14:41 compute-1 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 23 20:15:31 compute-1 sshd-session[29895]: Invalid user ubuntu from 92.118.39.92 port 40082
Nov 23 20:15:31 compute-1 sshd-session[29895]: Connection closed by invalid user ubuntu 92.118.39.92 port 40082 [preauth]
Nov 23 20:16:31 compute-1 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 23 20:16:31 compute-1 sshd-session[29899]: Invalid user unknown from 65.20.143.159 port 49488
Nov 23 20:16:31 compute-1 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 23 20:16:31 compute-1 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 23 20:16:31 compute-1 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 23 20:16:31 compute-1 sshd-session[29899]: Connection closed by invalid user unknown 65.20.143.159 port 49488 [preauth]
Nov 23 20:18:11 compute-1 sshd-session[29904]: Accepted publickey for zuul from 38.102.83.13 port 51724 ssh2: RSA SHA256:vJMLYQFuuPNw0oBlCMsukcLw8e8jDo/ucmylbroLweU
Nov 23 20:18:11 compute-1 systemd-logind[793]: New session 8 of user zuul.
Nov 23 20:18:12 compute-1 systemd[1]: Started Session 8 of User zuul.
Nov 23 20:18:12 compute-1 sshd-session[29904]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 20:18:12 compute-1 python3[29980]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:18:14 compute-1 sudo[30094]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccixaxpuwhntcpyfywstmzqgbvujkefi ; /usr/bin/python3'
Nov 23 20:18:14 compute-1 sudo[30094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:18:14 compute-1 python3[30096]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 20:18:14 compute-1 sudo[30094]: pam_unix(sudo:session): session closed for user root
Nov 23 20:18:14 compute-1 sudo[30167]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpkbtyrtmtuegguryidjjypklzlpenha ; /usr/bin/python3'
Nov 23 20:18:14 compute-1 sudo[30167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:18:14 compute-1 python3[30169]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763929094.2987156-33976-272682233161012/source mode=0755 _original_basename=delorean.repo follow=False checksum=1830be8248976a7f714fb01ca8550e92dfc79ad2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:18:15 compute-1 sudo[30167]: pam_unix(sudo:session): session closed for user root
Nov 23 20:18:15 compute-1 sudo[30193]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhiuarrentxogzquqybkmjigrqacupob ; /usr/bin/python3'
Nov 23 20:18:15 compute-1 sudo[30193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:18:15 compute-1 python3[30195]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 20:18:15 compute-1 sudo[30193]: pam_unix(sudo:session): session closed for user root
Nov 23 20:18:15 compute-1 sudo[30266]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wujazzfddyhnxtnzrskcwqtqzhkbyybr ; /usr/bin/python3'
Nov 23 20:18:15 compute-1 sudo[30266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:18:15 compute-1 python3[30268]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763929094.2987156-33976-272682233161012/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:18:15 compute-1 sudo[30266]: pam_unix(sudo:session): session closed for user root
Nov 23 20:18:15 compute-1 sudo[30292]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvazvzewrulpamnccvjioefcazpervwe ; /usr/bin/python3'
Nov 23 20:18:15 compute-1 sudo[30292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:18:15 compute-1 python3[30294]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 20:18:15 compute-1 sudo[30292]: pam_unix(sudo:session): session closed for user root
Nov 23 20:18:16 compute-1 sudo[30365]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mssmraadkahfydkdamimzfwisejwsfiq ; /usr/bin/python3'
Nov 23 20:18:16 compute-1 sudo[30365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:18:16 compute-1 python3[30367]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763929094.2987156-33976-272682233161012/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:18:16 compute-1 sudo[30365]: pam_unix(sudo:session): session closed for user root
Nov 23 20:18:16 compute-1 sudo[30391]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jypjaqhpppukbwrznihowchbfggpczas ; /usr/bin/python3'
Nov 23 20:18:16 compute-1 sudo[30391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:18:16 compute-1 python3[30393]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 20:18:16 compute-1 sudo[30391]: pam_unix(sudo:session): session closed for user root
Nov 23 20:18:16 compute-1 sudo[30464]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvlhcuierrkbzcdcwgwcmtsodnqpfaqo ; /usr/bin/python3'
Nov 23 20:18:16 compute-1 sudo[30464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:18:17 compute-1 python3[30466]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763929094.2987156-33976-272682233161012/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:18:17 compute-1 sudo[30464]: pam_unix(sudo:session): session closed for user root
Nov 23 20:18:17 compute-1 sudo[30490]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjkqbzmqtftykunilvquizgaxsjjqklj ; /usr/bin/python3'
Nov 23 20:18:17 compute-1 sudo[30490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:18:17 compute-1 python3[30492]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 20:18:17 compute-1 sudo[30490]: pam_unix(sudo:session): session closed for user root
Nov 23 20:18:17 compute-1 sudo[30563]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfbjzwgtwiczzxedcqtxesjkytrsqokz ; /usr/bin/python3'
Nov 23 20:18:17 compute-1 sudo[30563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:18:17 compute-1 python3[30565]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763929094.2987156-33976-272682233161012/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:18:17 compute-1 sudo[30563]: pam_unix(sudo:session): session closed for user root
Nov 23 20:18:17 compute-1 sudo[30589]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcykmzlkbwxyjfgylcyoviztzdbojzpm ; /usr/bin/python3'
Nov 23 20:18:17 compute-1 sudo[30589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:18:17 compute-1 python3[30591]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 20:18:17 compute-1 sudo[30589]: pam_unix(sudo:session): session closed for user root
Nov 23 20:18:18 compute-1 sudo[30662]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akbpgnzipgtspulcsvxcvvekrjlswgte ; /usr/bin/python3'
Nov 23 20:18:18 compute-1 sudo[30662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:18:18 compute-1 python3[30664]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763929094.2987156-33976-272682233161012/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:18:18 compute-1 sudo[30662]: pam_unix(sudo:session): session closed for user root
Nov 23 20:18:18 compute-1 sudo[30688]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvyaubemafywwkwblvkjiufrvrxstzdx ; /usr/bin/python3'
Nov 23 20:18:18 compute-1 sudo[30688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:18:18 compute-1 python3[30690]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 20:18:18 compute-1 sudo[30688]: pam_unix(sudo:session): session closed for user root
Nov 23 20:18:18 compute-1 sudo[30761]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nswngennqxikfvfxkuwqnsgbjdmkwjjf ; /usr/bin/python3'
Nov 23 20:18:18 compute-1 sudo[30761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:18:18 compute-1 python3[30763]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763929094.2987156-33976-272682233161012/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6646317362318a9831d66a1804f6bb7dd1b97cd5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:18:18 compute-1 sudo[30761]: pam_unix(sudo:session): session closed for user root
Nov 23 20:18:22 compute-1 sshd-session[30788]: Received disconnect from 43.225.142.116 port 47624:11: Bye Bye [preauth]
Nov 23 20:18:22 compute-1 sshd-session[30788]: Disconnected from authenticating user root 43.225.142.116 port 47624 [preauth]
Nov 23 20:18:23 compute-1 sshd-session[30790]: Received disconnect from 102.176.81.29 port 51708:11: Bye Bye [preauth]
Nov 23 20:18:23 compute-1 sshd-session[30790]: Disconnected from authenticating user root 102.176.81.29 port 51708 [preauth]
Nov 23 20:18:31 compute-1 python3[30816]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:18:32 compute-1 sshd-session[30818]: Invalid user smart from 34.91.0.68 port 54354
Nov 23 20:18:32 compute-1 sshd-session[30818]: Received disconnect from 34.91.0.68 port 54354:11: Bye Bye [preauth]
Nov 23 20:18:32 compute-1 sshd-session[30818]: Disconnected from invalid user smart 34.91.0.68 port 54354 [preauth]
Nov 23 20:18:45 compute-1 sshd-session[30820]: Invalid user sol from 92.118.39.92 port 53626
Nov 23 20:18:45 compute-1 sshd-session[30820]: Connection closed by invalid user sol 92.118.39.92 port 53626 [preauth]
Nov 23 20:19:01 compute-1 sshd[1005]: Timeout before authentication for connection from 182.44.80.242 to 38.102.83.106, pid = 29903
Nov 23 20:19:30 compute-1 sshd-session[30822]: Invalid user ekp from 43.225.142.116 port 44536
Nov 23 20:19:30 compute-1 sshd-session[30822]: Received disconnect from 43.225.142.116 port 44536:11: Bye Bye [preauth]
Nov 23 20:19:30 compute-1 sshd-session[30822]: Disconnected from invalid user ekp 43.225.142.116 port 44536 [preauth]
Nov 23 20:19:39 compute-1 sshd-session[30824]: Invalid user sysadmin from 118.145.189.160 port 54372
Nov 23 20:19:39 compute-1 sshd-session[30824]: Received disconnect from 118.145.189.160 port 54372:11: Bye Bye [preauth]
Nov 23 20:19:39 compute-1 sshd-session[30824]: Disconnected from invalid user sysadmin 118.145.189.160 port 54372 [preauth]
Nov 23 20:19:47 compute-1 sshd-session[30826]: Received disconnect from 34.91.0.68 port 57096:11: Bye Bye [preauth]
Nov 23 20:19:47 compute-1 sshd-session[30826]: Disconnected from authenticating user root 34.91.0.68 port 57096 [preauth]
Nov 23 20:19:52 compute-1 sshd-session[30829]: Invalid user q from 213.169.44.220 port 43346
Nov 23 20:19:52 compute-1 sshd-session[30829]: Received disconnect from 213.169.44.220 port 43346:11: Bye Bye [preauth]
Nov 23 20:19:52 compute-1 sshd-session[30829]: Disconnected from invalid user q 213.169.44.220 port 43346 [preauth]
Nov 23 20:20:19 compute-1 sshd-session[30831]: Invalid user min from 102.176.81.29 port 59502
Nov 23 20:20:19 compute-1 sshd-session[30831]: Received disconnect from 102.176.81.29 port 59502:11: Bye Bye [preauth]
Nov 23 20:20:19 compute-1 sshd-session[30831]: Disconnected from invalid user min 102.176.81.29 port 59502 [preauth]
Nov 23 20:20:34 compute-1 sshd-session[30833]: Invalid user server from 43.225.142.116 port 40672
Nov 23 20:20:34 compute-1 sshd-session[30833]: Received disconnect from 43.225.142.116 port 40672:11: Bye Bye [preauth]
Nov 23 20:20:34 compute-1 sshd-session[30833]: Disconnected from invalid user server 43.225.142.116 port 40672 [preauth]
Nov 23 20:20:59 compute-1 sshd-session[30836]: Invalid user halo from 34.91.0.68 port 59082
Nov 23 20:20:59 compute-1 sshd-session[30836]: Received disconnect from 34.91.0.68 port 59082:11: Bye Bye [preauth]
Nov 23 20:20:59 compute-1 sshd-session[30836]: Disconnected from invalid user halo 34.91.0.68 port 59082 [preauth]
Nov 23 20:21:19 compute-1 sshd-session[30838]: Invalid user test from 118.145.189.160 port 55856
Nov 23 20:21:19 compute-1 sshd-session[30838]: Received disconnect from 118.145.189.160 port 55856:11: Bye Bye [preauth]
Nov 23 20:21:19 compute-1 sshd-session[30838]: Disconnected from invalid user test 118.145.189.160 port 55856 [preauth]
Nov 23 20:21:35 compute-1 sshd-session[30841]: Invalid user cat from 43.225.142.116 port 36800
Nov 23 20:21:35 compute-1 sshd-session[30841]: Received disconnect from 43.225.142.116 port 36800:11: Bye Bye [preauth]
Nov 23 20:21:35 compute-1 sshd-session[30841]: Disconnected from invalid user cat 43.225.142.116 port 36800 [preauth]
Nov 23 20:21:46 compute-1 sshd-session[30843]: Received disconnect from 102.176.81.29 port 33780:11: Bye Bye [preauth]
Nov 23 20:21:46 compute-1 sshd-session[30843]: Disconnected from authenticating user root 102.176.81.29 port 33780 [preauth]
Nov 23 20:22:03 compute-1 sshd-session[30845]: Received disconnect from 34.91.0.68 port 32824:11: Bye Bye [preauth]
Nov 23 20:22:03 compute-1 sshd-session[30845]: Disconnected from authenticating user root 34.91.0.68 port 32824 [preauth]
Nov 23 20:22:07 compute-1 sshd-session[30847]: Invalid user solana from 92.118.39.92 port 38918
Nov 23 20:22:07 compute-1 sshd-session[30847]: Connection closed by invalid user solana 92.118.39.92 port 38918 [preauth]
Nov 23 20:22:16 compute-1 sshd-session[30849]: Connection closed by authenticating user root 185.156.73.233 port 43478 [preauth]
Nov 23 20:22:37 compute-1 sshd-session[30851]: Received disconnect from 43.225.142.116 port 32930:11: Bye Bye [preauth]
Nov 23 20:22:37 compute-1 sshd-session[30851]: Disconnected from authenticating user root 43.225.142.116 port 32930 [preauth]
Nov 23 20:23:05 compute-1 sshd-session[30853]: Invalid user lucas from 34.91.0.68 port 34800
Nov 23 20:23:05 compute-1 sshd-session[30853]: Received disconnect from 34.91.0.68 port 34800:11: Bye Bye [preauth]
Nov 23 20:23:05 compute-1 sshd-session[30853]: Disconnected from invalid user lucas 34.91.0.68 port 34800 [preauth]
Nov 23 20:23:10 compute-1 sshd-session[30855]: Invalid user web from 102.176.81.29 port 36248
Nov 23 20:23:10 compute-1 sshd-session[30855]: Received disconnect from 102.176.81.29 port 36248:11: Bye Bye [preauth]
Nov 23 20:23:10 compute-1 sshd-session[30855]: Disconnected from invalid user web 102.176.81.29 port 36248 [preauth]
Nov 23 20:23:20 compute-1 sshd[1005]: Timeout before authentication for connection from 220.164.39.21 to 38.102.83.106, pid = 30840
Nov 23 20:23:30 compute-1 sshd-session[29907]: Received disconnect from 38.102.83.13 port 51724:11: disconnected by user
Nov 23 20:23:30 compute-1 sshd-session[29907]: Disconnected from user zuul 38.102.83.13 port 51724
Nov 23 20:23:30 compute-1 sshd-session[29904]: pam_unix(sshd:session): session closed for user zuul
Nov 23 20:23:30 compute-1 systemd[1]: session-8.scope: Deactivated successfully.
Nov 23 20:23:30 compute-1 systemd[1]: session-8.scope: Consumed 5.035s CPU time.
Nov 23 20:23:30 compute-1 systemd-logind[793]: Session 8 logged out. Waiting for processes to exit.
Nov 23 20:23:30 compute-1 systemd-logind[793]: Removed session 8.
Nov 23 20:23:37 compute-1 sshd-session[30857]: Invalid user jose from 43.225.142.116 port 57286
Nov 23 20:23:37 compute-1 sshd-session[30857]: Received disconnect from 43.225.142.116 port 57286:11: Bye Bye [preauth]
Nov 23 20:23:37 compute-1 sshd-session[30857]: Disconnected from invalid user jose 43.225.142.116 port 57286 [preauth]
Nov 23 20:24:05 compute-1 sshd-session[30860]: Invalid user victor from 34.91.0.68 port 36774
Nov 23 20:24:05 compute-1 sshd-session[30860]: Received disconnect from 34.91.0.68 port 36774:11: Bye Bye [preauth]
Nov 23 20:24:05 compute-1 sshd-session[30860]: Disconnected from invalid user victor 34.91.0.68 port 36774 [preauth]
Nov 23 20:24:29 compute-1 sshd-session[30862]: Invalid user local from 102.176.81.29 port 38678
Nov 23 20:24:29 compute-1 sshd-session[30862]: Received disconnect from 102.176.81.29 port 38678:11: Bye Bye [preauth]
Nov 23 20:24:29 compute-1 sshd-session[30862]: Disconnected from invalid user local 102.176.81.29 port 38678 [preauth]
Nov 23 20:24:39 compute-1 sshd-session[30864]: Received disconnect from 43.225.142.116 port 53416:11: Bye Bye [preauth]
Nov 23 20:24:39 compute-1 sshd-session[30864]: Disconnected from authenticating user root 43.225.142.116 port 53416 [preauth]
Nov 23 20:25:07 compute-1 systemd[1]: Starting dnf makecache...
Nov 23 20:25:07 compute-1 sshd-session[30866]: Invalid user node from 92.118.39.92 port 52468
Nov 23 20:25:07 compute-1 sshd-session[30866]: Connection closed by invalid user node 92.118.39.92 port 52468 [preauth]
Nov 23 20:25:07 compute-1 dnf[30868]: Failed determining last makecache time.
Nov 23 20:25:07 compute-1 dnf[30868]: delorean-openstack-barbican-42b4c41831408a8e323 144 kB/s |  13 kB     00:00
Nov 23 20:25:07 compute-1 dnf[30868]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 1.6 MB/s |  65 kB     00:00
Nov 23 20:25:07 compute-1 dnf[30868]: delorean-openstack-cinder-1c00d6490d88e436f26ef 960 kB/s |  32 kB     00:00
Nov 23 20:25:07 compute-1 dnf[30868]: delorean-python-stevedore-c4acc5639fd2329372142 4.4 MB/s | 131 kB     00:00
Nov 23 20:25:07 compute-1 dnf[30868]: delorean-python-observabilityclient-2f31846d73c 362 kB/s |  25 kB     00:00
Nov 23 20:25:08 compute-1 dnf[30868]: delorean-os-net-config-bbae2ed8a159b0435a473f38 1.7 MB/s | 356 kB     00:00
Nov 23 20:25:08 compute-1 dnf[30868]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 470 kB/s |  42 kB     00:00
Nov 23 20:25:08 compute-1 dnf[30868]: delorean-python-designate-tests-tempest-347fdbc 314 kB/s |  18 kB     00:00
Nov 23 20:25:08 compute-1 dnf[30868]: delorean-openstack-glance-1fd12c29b339f30fe823e 395 kB/s |  18 kB     00:00
Nov 23 20:25:08 compute-1 dnf[30868]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 1.0 MB/s |  29 kB     00:00
Nov 23 20:25:08 compute-1 dnf[30868]: delorean-openstack-manila-3c01b7181572c95dac462 944 kB/s |  25 kB     00:00
Nov 23 20:25:08 compute-1 dnf[30868]: delorean-python-whitebox-neutron-tests-tempest- 4.4 MB/s | 154 kB     00:00
Nov 23 20:25:08 compute-1 dnf[30868]: delorean-openstack-octavia-ba397f07a7331190208c 444 kB/s |  26 kB     00:00
Nov 23 20:25:08 compute-1 dnf[30868]: delorean-openstack-watcher-c014f81a8647287f6dcc 191 kB/s |  16 kB     00:00
Nov 23 20:25:08 compute-1 dnf[30868]: delorean-python-tcib-1124124ec06aadbac34f0d340b  76 kB/s | 7.4 kB     00:00
Nov 23 20:25:09 compute-1 dnf[30868]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 652 kB/s | 144 kB     00:00
Nov 23 20:25:09 compute-1 dnf[30868]: delorean-openstack-swift-dc98a8463506ac520c469a 167 kB/s |  14 kB     00:00
Nov 23 20:25:09 compute-1 dnf[30868]: delorean-python-tempestconf-8515371b7cceebd4282 1.9 MB/s |  53 kB     00:00
Nov 23 20:25:09 compute-1 dnf[30868]: delorean-openstack-heat-ui-013accbfd179753bc3f0 3.4 MB/s |  96 kB     00:00
Nov 23 20:25:09 compute-1 dnf[30868]: CentOS Stream 9 - BaseOS                         53 kB/s | 7.3 kB     00:00
Nov 23 20:25:09 compute-1 dnf[30868]: CentOS Stream 9 - AppStream                      72 kB/s | 7.4 kB     00:00
Nov 23 20:25:09 compute-1 dnf[30868]: CentOS Stream 9 - CRB                            78 kB/s | 7.2 kB     00:00
Nov 23 20:25:10 compute-1 dnf[30868]: CentOS Stream 9 - Extras packages                72 kB/s | 8.3 kB     00:00
Nov 23 20:25:10 compute-1 dnf[30868]: dlrn-antelope-testing                            28 MB/s | 1.1 MB     00:00
Nov 23 20:25:10 compute-1 sshd-session[30917]: Invalid user deamon from 118.145.189.160 port 36248
Nov 23 20:25:10 compute-1 sshd-session[30917]: Received disconnect from 118.145.189.160 port 36248:11: Bye Bye [preauth]
Nov 23 20:25:10 compute-1 sshd-session[30917]: Disconnected from invalid user deamon 118.145.189.160 port 36248 [preauth]
Nov 23 20:25:10 compute-1 dnf[30868]: dlrn-antelope-build-deps                         14 MB/s | 461 kB     00:00
Nov 23 20:25:10 compute-1 dnf[30868]: centos9-rabbitmq                                9.4 MB/s | 123 kB     00:00
Nov 23 20:25:10 compute-1 dnf[30868]: centos9-storage                                  23 MB/s | 415 kB     00:00
Nov 23 20:25:10 compute-1 dnf[30868]: centos9-opstools                                4.4 MB/s |  51 kB     00:00
Nov 23 20:25:11 compute-1 dnf[30868]: NFV SIG OpenvSwitch                              27 MB/s | 454 kB     00:00
Nov 23 20:25:11 compute-1 sshd-session[30938]: Invalid user vpn from 34.91.0.68 port 38748
Nov 23 20:25:11 compute-1 sshd-session[30938]: Received disconnect from 34.91.0.68 port 38748:11: Bye Bye [preauth]
Nov 23 20:25:11 compute-1 sshd-session[30938]: Disconnected from invalid user vpn 34.91.0.68 port 38748 [preauth]
Nov 23 20:25:11 compute-1 dnf[30868]: repo-setup-centos-appstream                      66 MB/s |  25 MB     00:00
Nov 23 20:25:17 compute-1 dnf[30868]: repo-setup-centos-baseos                         69 MB/s | 8.8 MB     00:00
Nov 23 20:25:18 compute-1 dnf[30868]: repo-setup-centos-highavailability               18 MB/s | 744 kB     00:00
Nov 23 20:25:19 compute-1 dnf[30868]: repo-setup-centos-powertools                     36 MB/s | 7.3 MB     00:00
Nov 23 20:25:22 compute-1 dnf[30868]: Extra Packages for Enterprise Linux 9 - x86_64   17 MB/s |  20 MB     00:01
Nov 23 20:25:35 compute-1 dnf[30868]: Metadata cache created.
Nov 23 20:25:35 compute-1 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 23 20:25:35 compute-1 systemd[1]: Finished dnf makecache.
Nov 23 20:25:35 compute-1 systemd[1]: dnf-makecache.service: Consumed 24.147s CPU time.
Nov 23 20:25:42 compute-1 sshd-session[30974]: Invalid user sysadmin from 43.225.142.116 port 49546
Nov 23 20:25:42 compute-1 sshd-session[30974]: Received disconnect from 43.225.142.116 port 49546:11: Bye Bye [preauth]
Nov 23 20:25:42 compute-1 sshd-session[30974]: Disconnected from invalid user sysadmin 43.225.142.116 port 49546 [preauth]
Nov 23 20:25:48 compute-1 sshd-session[30976]: Invalid user usuario from 102.176.81.29 port 41108
Nov 23 20:25:49 compute-1 sshd-session[30976]: Received disconnect from 102.176.81.29 port 41108:11: Bye Bye [preauth]
Nov 23 20:25:49 compute-1 sshd-session[30976]: Disconnected from invalid user usuario 102.176.81.29 port 41108 [preauth]
Nov 23 20:26:18 compute-1 sshd-session[30978]: Invalid user bob from 34.91.0.68 port 40722
Nov 23 20:26:18 compute-1 sshd-session[30978]: Received disconnect from 34.91.0.68 port 40722:11: Bye Bye [preauth]
Nov 23 20:26:18 compute-1 sshd-session[30978]: Disconnected from invalid user bob 34.91.0.68 port 40722 [preauth]
Nov 23 20:26:46 compute-1 sshd-session[30980]: Invalid user testuser from 43.225.142.116 port 45672
Nov 23 20:26:46 compute-1 sshd-session[30980]: Received disconnect from 43.225.142.116 port 45672:11: Bye Bye [preauth]
Nov 23 20:26:46 compute-1 sshd-session[30980]: Disconnected from invalid user testuser 43.225.142.116 port 45672 [preauth]
Nov 23 20:27:13 compute-1 sshd-session[30983]: Invalid user kevin from 102.176.81.29 port 43590
Nov 23 20:27:14 compute-1 sshd-session[30983]: Received disconnect from 102.176.81.29 port 43590:11: Bye Bye [preauth]
Nov 23 20:27:14 compute-1 sshd-session[30983]: Disconnected from invalid user kevin 102.176.81.29 port 43590 [preauth]
Nov 23 20:27:20 compute-1 sshd-session[30985]: Connection closed by 95.53.245.92 port 45655
Nov 23 20:27:21 compute-1 sshd-session[30986]: Invalid user default from 34.126.114.239 port 35966
Nov 23 20:27:21 compute-1 sshd-session[30986]: Connection closed by invalid user default 34.126.114.239 port 35966 [preauth]
Nov 23 20:27:24 compute-1 sshd-session[30988]: Received disconnect from 34.91.0.68 port 42704:11: Bye Bye [preauth]
Nov 23 20:27:24 compute-1 sshd-session[30988]: Disconnected from authenticating user root 34.91.0.68 port 42704 [preauth]
Nov 23 20:27:47 compute-1 sshd-session[30991]: Invalid user usuario from 118.145.189.160 port 33016
Nov 23 20:27:47 compute-1 sshd-session[30991]: Received disconnect from 118.145.189.160 port 33016:11: Bye Bye [preauth]
Nov 23 20:27:47 compute-1 sshd-session[30991]: Disconnected from invalid user usuario 118.145.189.160 port 33016 [preauth]
Nov 23 20:27:52 compute-1 sshd-session[30993]: Received disconnect from 43.225.142.116 port 41802:11: Bye Bye [preauth]
Nov 23 20:27:52 compute-1 sshd-session[30993]: Disconnected from authenticating user root 43.225.142.116 port 41802 [preauth]
Nov 23 20:28:13 compute-1 sshd-session[30995]: Invalid user validator from 92.118.39.92 port 37776
Nov 23 20:28:13 compute-1 sshd-session[30995]: Connection closed by invalid user validator 92.118.39.92 port 37776 [preauth]
Nov 23 20:28:28 compute-1 sshd-session[30997]: Received disconnect from 34.91.0.68 port 44684:11: Bye Bye [preauth]
Nov 23 20:28:28 compute-1 sshd-session[30997]: Disconnected from authenticating user root 34.91.0.68 port 44684 [preauth]
Nov 23 20:28:34 compute-1 sshd-session[30999]: Received disconnect from 102.176.81.29 port 46098:11: Bye Bye [preauth]
Nov 23 20:28:34 compute-1 sshd-session[30999]: Disconnected from authenticating user root 102.176.81.29 port 46098 [preauth]
Nov 23 20:28:56 compute-1 sshd-session[31001]: Invalid user deamon from 43.225.142.116 port 37932
Nov 23 20:28:56 compute-1 sshd-session[31001]: Received disconnect from 43.225.142.116 port 37932:11: Bye Bye [preauth]
Nov 23 20:28:56 compute-1 sshd-session[31001]: Disconnected from invalid user deamon 43.225.142.116 port 37932 [preauth]
Nov 23 20:29:08 compute-1 sshd-session[31003]: Invalid user web from 118.145.189.160 port 51832
Nov 23 20:29:08 compute-1 sshd-session[31003]: Received disconnect from 118.145.189.160 port 51832:11: Bye Bye [preauth]
Nov 23 20:29:08 compute-1 sshd-session[31003]: Disconnected from invalid user web 118.145.189.160 port 51832 [preauth]
Nov 23 20:29:29 compute-1 sshd-session[31006]: Invalid user es from 34.91.0.68 port 46654
Nov 23 20:29:29 compute-1 sshd-session[31006]: Received disconnect from 34.91.0.68 port 46654:11: Bye Bye [preauth]
Nov 23 20:29:29 compute-1 sshd-session[31006]: Disconnected from invalid user es 34.91.0.68 port 46654 [preauth]
Nov 23 20:29:41 compute-1 sshd-session[31008]: Accepted publickey for zuul from 192.168.122.30 port 40966 ssh2: ECDSA SHA256:7LF3rB/846W//CS4OIcVKlH1BXQGVCcZuH+b9rjPyTo
Nov 23 20:29:41 compute-1 systemd-logind[793]: New session 9 of user zuul.
Nov 23 20:29:41 compute-1 systemd[1]: Started Session 9 of User zuul.
Nov 23 20:29:41 compute-1 sshd-session[31008]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 20:29:42 compute-1 python3.9[31161]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:29:43 compute-1 sudo[31340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdkomwxerixhjqkhtolvrhjssckepbrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929783.094631-57-40446758079275/AnsiballZ_command.py'
Nov 23 20:29:43 compute-1 sudo[31340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:29:43 compute-1 python3.9[31342]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:29:51 compute-1 sudo[31340]: pam_unix(sudo:session): session closed for user root
Nov 23 20:29:51 compute-1 sshd-session[31011]: Connection closed by 192.168.122.30 port 40966
Nov 23 20:29:51 compute-1 sshd-session[31008]: pam_unix(sshd:session): session closed for user zuul
Nov 23 20:29:51 compute-1 systemd[1]: session-9.scope: Deactivated successfully.
Nov 23 20:29:51 compute-1 systemd[1]: session-9.scope: Consumed 8.338s CPU time.
Nov 23 20:29:51 compute-1 systemd-logind[793]: Session 9 logged out. Waiting for processes to exit.
Nov 23 20:29:51 compute-1 systemd-logind[793]: Removed session 9.
Nov 23 20:29:52 compute-1 sshd-session[31382]: Invalid user deamon from 102.176.81.29 port 48554
Nov 23 20:29:52 compute-1 sshd-session[31382]: Received disconnect from 102.176.81.29 port 48554:11: Bye Bye [preauth]
Nov 23 20:29:52 compute-1 sshd-session[31382]: Disconnected from invalid user deamon 102.176.81.29 port 48554 [preauth]
Nov 23 20:30:00 compute-1 sshd-session[31401]: Received disconnect from 43.225.142.116 port 34058:11: Bye Bye [preauth]
Nov 23 20:30:00 compute-1 sshd-session[31401]: Disconnected from authenticating user root 43.225.142.116 port 34058 [preauth]
Nov 23 20:30:06 compute-1 sshd-session[31403]: Accepted publickey for zuul from 192.168.122.30 port 47650 ssh2: ECDSA SHA256:7LF3rB/846W//CS4OIcVKlH1BXQGVCcZuH+b9rjPyTo
Nov 23 20:30:06 compute-1 systemd-logind[793]: New session 10 of user zuul.
Nov 23 20:30:06 compute-1 systemd[1]: Started Session 10 of User zuul.
Nov 23 20:30:06 compute-1 sshd-session[31403]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 20:30:07 compute-1 python3.9[31556]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 23 20:30:09 compute-1 python3.9[31730]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:30:10 compute-1 sudo[31880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lniyxmbjbmezpdtnmjnbffxzjckjyass ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929809.8398898-94-190412076914774/AnsiballZ_command.py'
Nov 23 20:30:10 compute-1 sudo[31880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:30:10 compute-1 python3.9[31882]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:30:10 compute-1 sudo[31880]: pam_unix(sudo:session): session closed for user root
Nov 23 20:30:11 compute-1 sudo[32033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixcthgwenvnkjucsewknerdrlqzwohye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929810.920094-130-59940773289897/AnsiballZ_stat.py'
Nov 23 20:30:11 compute-1 sudo[32033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:30:11 compute-1 irqbalance[786]: Cannot change IRQ 26 affinity: Operation not permitted
Nov 23 20:30:11 compute-1 irqbalance[786]: IRQ 26 affinity is now unmanaged
Nov 23 20:30:11 compute-1 python3.9[32035]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:30:11 compute-1 sudo[32033]: pam_unix(sudo:session): session closed for user root
Nov 23 20:30:12 compute-1 sudo[32185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igweahnikboxfuuwgnpjqyyxoruotsff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929811.844237-154-127046694702376/AnsiballZ_file.py'
Nov 23 20:30:12 compute-1 sudo[32185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:30:12 compute-1 python3.9[32187]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:30:12 compute-1 sudo[32185]: pam_unix(sudo:session): session closed for user root
Nov 23 20:30:13 compute-1 sudo[32337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hemeaaxxosbhafddoalftrjalrxstvth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929812.8966084-178-220138245458415/AnsiballZ_stat.py'
Nov 23 20:30:13 compute-1 sudo[32337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:30:13 compute-1 python3.9[32339]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:30:13 compute-1 sudo[32337]: pam_unix(sudo:session): session closed for user root
Nov 23 20:30:14 compute-1 sudo[32460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oppkqsqlmfvpavbjdkcnkmvbnhodbcom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929812.8966084-178-220138245458415/AnsiballZ_copy.py'
Nov 23 20:30:14 compute-1 sudo[32460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:30:14 compute-1 python3.9[32462]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1763929812.8966084-178-220138245458415/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:30:14 compute-1 sudo[32460]: pam_unix(sudo:session): session closed for user root
Nov 23 20:30:14 compute-1 sudo[32612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thaaktcrwxxsmsbynurllhkbaibwfynd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929814.4846416-223-127596750299510/AnsiballZ_setup.py'
Nov 23 20:30:14 compute-1 sudo[32612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:30:15 compute-1 python3.9[32614]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:30:15 compute-1 sudo[32612]: pam_unix(sudo:session): session closed for user root
Nov 23 20:30:15 compute-1 sudo[32768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vktwwbommbnmquvexvzykmvkxgsgbbcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929815.5805428-247-44474496003289/AnsiballZ_file.py'
Nov 23 20:30:15 compute-1 sudo[32768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:30:16 compute-1 python3.9[32770]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:30:16 compute-1 sudo[32768]: pam_unix(sudo:session): session closed for user root
Nov 23 20:30:16 compute-1 sudo[32920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppxmtxzxfoorutjklwgukdmvxgaqhiem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929816.5408335-274-238715122165177/AnsiballZ_file.py'
Nov 23 20:30:16 compute-1 sudo[32920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:30:17 compute-1 python3.9[32922]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:30:17 compute-1 sudo[32920]: pam_unix(sudo:session): session closed for user root
Nov 23 20:30:17 compute-1 python3.9[33072]: ansible-ansible.builtin.service_facts Invoked
Nov 23 20:30:23 compute-1 python3.9[33325]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:30:24 compute-1 python3.9[33475]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:30:25 compute-1 python3.9[33629]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:30:26 compute-1 sudo[33785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulatkdvclevgpkfklktwnjshxcqprurm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929826.1127453-418-272579842927339/AnsiballZ_setup.py'
Nov 23 20:30:26 compute-1 sudo[33785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:30:26 compute-1 python3.9[33787]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 20:30:26 compute-1 sudo[33785]: pam_unix(sudo:session): session closed for user root
Nov 23 20:30:27 compute-1 sudo[33869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddauwonmrvqpkymfziowkahqnixcyzzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929826.1127453-418-272579842927339/AnsiballZ_dnf.py'
Nov 23 20:30:27 compute-1 sudo[33869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:30:27 compute-1 python3.9[33871]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 20:30:30 compute-1 sshd-session[33925]: Invalid user ts3 from 34.91.0.68 port 48628
Nov 23 20:30:31 compute-1 sshd-session[33925]: Received disconnect from 34.91.0.68 port 48628:11: Bye Bye [preauth]
Nov 23 20:30:31 compute-1 sshd-session[33925]: Disconnected from invalid user ts3 34.91.0.68 port 48628 [preauth]
Nov 23 20:30:31 compute-1 sshd-session[33937]: Connection closed by 161.35.179.103 port 50552
Nov 23 20:31:04 compute-1 sshd-session[34014]: Invalid user min from 43.225.142.116 port 58422
Nov 23 20:31:05 compute-1 sshd-session[34014]: Received disconnect from 43.225.142.116 port 58422:11: Bye Bye [preauth]
Nov 23 20:31:05 compute-1 sshd-session[34014]: Disconnected from invalid user min 43.225.142.116 port 58422 [preauth]
Nov 23 20:31:11 compute-1 systemd[1]: Reloading.
Nov 23 20:31:11 compute-1 systemd-rc-local-generator[34069]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:31:11 compute-1 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 23 20:31:12 compute-1 systemd[1]: Reloading.
Nov 23 20:31:12 compute-1 systemd-rc-local-generator[34106]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:31:12 compute-1 sshd-session[34043]: Invalid user ekp from 102.176.81.29 port 51078
Nov 23 20:31:12 compute-1 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 23 20:31:12 compute-1 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 23 20:31:12 compute-1 systemd[1]: Reloading.
Nov 23 20:31:12 compute-1 systemd-rc-local-generator[34153]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:31:12 compute-1 sshd-session[34043]: Received disconnect from 102.176.81.29 port 51078:11: Bye Bye [preauth]
Nov 23 20:31:12 compute-1 sshd-session[34043]: Disconnected from invalid user ekp 102.176.81.29 port 51078 [preauth]
Nov 23 20:31:12 compute-1 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 23 20:31:13 compute-1 dbus-broker-launch[755]: Noticed file-system modification, trigger reload.
Nov 23 20:31:13 compute-1 dbus-broker-launch[755]: Noticed file-system modification, trigger reload.
Nov 23 20:31:13 compute-1 dbus-broker-launch[755]: Noticed file-system modification, trigger reload.
Nov 23 20:31:18 compute-1 sshd-session[34188]: Invalid user polkadot from 92.118.39.92 port 51286
Nov 23 20:31:18 compute-1 sshd-session[34188]: Connection closed by invalid user polkadot 92.118.39.92 port 51286 [preauth]
Nov 23 20:31:37 compute-1 sshd-session[34242]: Received disconnect from 34.91.0.68 port 50606:11: Bye Bye [preauth]
Nov 23 20:31:37 compute-1 sshd-session[34242]: Disconnected from authenticating user root 34.91.0.68 port 50606 [preauth]
Nov 23 20:32:11 compute-1 sshd-session[34333]: Invalid user test from 43.225.142.116 port 54556
Nov 23 20:32:11 compute-1 sshd-session[34333]: Received disconnect from 43.225.142.116 port 54556:11: Bye Bye [preauth]
Nov 23 20:32:11 compute-1 sshd-session[34333]: Disconnected from invalid user test 43.225.142.116 port 54556 [preauth]
Nov 23 20:32:27 compute-1 kernel: SELinux:  Converting 2718 SID table entries...
Nov 23 20:32:27 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 20:32:27 compute-1 kernel: SELinux:  policy capability open_perms=1
Nov 23 20:32:27 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 20:32:27 compute-1 kernel: SELinux:  policy capability always_check_network=0
Nov 23 20:32:27 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 20:32:27 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 20:32:27 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 20:32:27 compute-1 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 23 20:32:27 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 20:32:27 compute-1 systemd[1]: Starting man-db-cache-update.service...
Nov 23 20:32:27 compute-1 systemd[1]: Reloading.
Nov 23 20:32:27 compute-1 systemd-rc-local-generator[34477]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:32:27 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 23 20:32:28 compute-1 sudo[33869]: pam_unix(sudo:session): session closed for user root
Nov 23 20:32:28 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 23 20:32:28 compute-1 systemd[1]: Finished man-db-cache-update.service.
Nov 23 20:32:28 compute-1 systemd[1]: run-re06584b61c12435db70c4ebec33d82b7.service: Deactivated successfully.
Nov 23 20:32:28 compute-1 sudo[35389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijqmeypdztzoulrftctxqvwlpbkdxzmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929948.5565813-455-145910579613513/AnsiballZ_command.py'
Nov 23 20:32:28 compute-1 sudo[35389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:32:28 compute-1 python3.9[35391]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:32:29 compute-1 sudo[35389]: pam_unix(sudo:session): session closed for user root
Nov 23 20:32:31 compute-1 sudo[35670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjdqmwakneohkzvttagnjcvjvzsyplif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929950.5147727-478-41410817923559/AnsiballZ_selinux.py'
Nov 23 20:32:31 compute-1 sudo[35670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:32:31 compute-1 python3.9[35672]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 23 20:32:31 compute-1 sudo[35670]: pam_unix(sudo:session): session closed for user root
Nov 23 20:32:32 compute-1 sudo[35822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ullyhgxvcxtenbymwfhhqdtcojlobetu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929951.8777895-511-64987000655116/AnsiballZ_command.py'
Nov 23 20:32:32 compute-1 sudo[35822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:32:32 compute-1 python3.9[35824]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 23 20:32:33 compute-1 sudo[35822]: pam_unix(sudo:session): session closed for user root
Nov 23 20:32:33 compute-1 sshd-session[35826]: Invalid user test from 102.176.81.29 port 53560
Nov 23 20:32:33 compute-1 sshd-session[35826]: Received disconnect from 102.176.81.29 port 53560:11: Bye Bye [preauth]
Nov 23 20:32:33 compute-1 sshd-session[35826]: Disconnected from invalid user test 102.176.81.29 port 53560 [preauth]
Nov 23 20:32:34 compute-1 sudo[35977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abhbhjouastitenpgkksvqbwqyzktyrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929953.820485-535-180229542421967/AnsiballZ_file.py'
Nov 23 20:32:34 compute-1 sudo[35977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:32:36 compute-1 python3.9[35979]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:32:36 compute-1 sudo[35977]: pam_unix(sudo:session): session closed for user root
Nov 23 20:32:37 compute-1 sudo[36130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovzwvoxphinbbyvivilmqjswtmxjoqey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929957.4380774-560-157687204583021/AnsiballZ_mount.py'
Nov 23 20:32:37 compute-1 sudo[36130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:32:38 compute-1 python3.9[36132]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 23 20:32:38 compute-1 sudo[36130]: pam_unix(sudo:session): session closed for user root
Nov 23 20:32:39 compute-1 sudo[36283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzzalfutgozdzsomxiyfqzkoqgwqpcwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929959.4060473-643-27921610636216/AnsiballZ_file.py'
Nov 23 20:32:39 compute-1 sudo[36283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:32:44 compute-1 python3.9[36285]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:32:44 compute-1 sudo[36283]: pam_unix(sudo:session): session closed for user root
Nov 23 20:32:47 compute-1 sudo[36435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khjmqpimeetfzyfxgzhwrgyevkiduhmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929966.6912267-667-126697307908575/AnsiballZ_stat.py'
Nov 23 20:32:47 compute-1 sudo[36435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:32:47 compute-1 python3.9[36437]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:32:47 compute-1 sudo[36435]: pam_unix(sudo:session): session closed for user root
Nov 23 20:32:47 compute-1 sudo[36558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oybbflimpxdnesjhlpnbseupirxgeonj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929966.6912267-667-126697307908575/AnsiballZ_copy.py'
Nov 23 20:32:47 compute-1 sudo[36558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:32:47 compute-1 python3.9[36560]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763929966.6912267-667-126697307908575/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=848940549ac5db80ec615963c7c09743939a62fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:32:47 compute-1 sudo[36558]: pam_unix(sudo:session): session closed for user root
Nov 23 20:32:48 compute-1 sudo[36712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtgouiggqvdmjizgvffmgvzeaydcwztj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929968.6272306-739-181851492844011/AnsiballZ_stat.py'
Nov 23 20:32:48 compute-1 sudo[36712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:32:49 compute-1 python3.9[36714]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:32:49 compute-1 sudo[36712]: pam_unix(sudo:session): session closed for user root
Nov 23 20:32:49 compute-1 sshd-session[36585]: Received disconnect from 34.91.0.68 port 52590:11: Bye Bye [preauth]
Nov 23 20:32:49 compute-1 sshd-session[36585]: Disconnected from authenticating user root 34.91.0.68 port 52590 [preauth]
Nov 23 20:32:49 compute-1 sudo[36864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktsrrdqnlktesawdvuuxnimmwxgurwbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929969.4012818-763-137155525972604/AnsiballZ_command.py'
Nov 23 20:32:49 compute-1 sudo[36864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:32:49 compute-1 python3.9[36866]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:32:49 compute-1 sudo[36864]: pam_unix(sudo:session): session closed for user root
Nov 23 20:32:50 compute-1 sudo[37017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgnnzizeqaxnuanqoaaxlstpxidyozhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929970.2877276-787-209910575008511/AnsiballZ_file.py'
Nov 23 20:32:50 compute-1 sudo[37017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:32:50 compute-1 python3.9[37019]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:32:50 compute-1 sudo[37017]: pam_unix(sudo:session): session closed for user root
Nov 23 20:32:51 compute-1 sudo[37169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fadkgbtoohrtawayhdycozncxecnlbot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929971.4096334-820-262084755922976/AnsiballZ_getent.py'
Nov 23 20:32:51 compute-1 sudo[37169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:32:52 compute-1 python3.9[37171]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 23 20:32:52 compute-1 sudo[37169]: pam_unix(sudo:session): session closed for user root
Nov 23 20:32:52 compute-1 sudo[37322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjoeqsnggwnzkomriztzcxqarapmdzjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929972.3626006-844-105784488741915/AnsiballZ_group.py'
Nov 23 20:32:52 compute-1 sudo[37322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:32:53 compute-1 python3.9[37324]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 23 20:32:53 compute-1 groupadd[37325]: group added to /etc/group: name=qemu, GID=107
Nov 23 20:32:53 compute-1 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 20:32:53 compute-1 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 20:32:53 compute-1 groupadd[37325]: group added to /etc/gshadow: name=qemu
Nov 23 20:32:53 compute-1 groupadd[37325]: new group: name=qemu, GID=107
Nov 23 20:32:53 compute-1 sudo[37322]: pam_unix(sudo:session): session closed for user root
Nov 23 20:32:53 compute-1 sudo[37481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spehfsaneacxbwwbiuqvozdgggyilmad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929973.3886561-868-189581920560898/AnsiballZ_user.py'
Nov 23 20:32:53 compute-1 sudo[37481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:32:54 compute-1 python3.9[37483]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 23 20:32:54 compute-1 useradd[37485]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Nov 23 20:32:54 compute-1 sudo[37481]: pam_unix(sudo:session): session closed for user root
Nov 23 20:32:54 compute-1 sudo[37641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqwaegcvsrgtfrnxavpskzothypfglpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929974.5097258-892-102684942979437/AnsiballZ_getent.py'
Nov 23 20:32:54 compute-1 sudo[37641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:32:55 compute-1 python3.9[37643]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 23 20:32:55 compute-1 sudo[37641]: pam_unix(sudo:session): session closed for user root
Nov 23 20:32:55 compute-1 sudo[37794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyxncvcjklwtceufuxjuxeobbwchwsgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929975.4213645-916-243735422594837/AnsiballZ_group.py'
Nov 23 20:32:55 compute-1 sudo[37794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:32:55 compute-1 python3.9[37796]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 23 20:32:56 compute-1 groupadd[37797]: group added to /etc/group: name=hugetlbfs, GID=42477
Nov 23 20:32:56 compute-1 groupadd[37797]: group added to /etc/gshadow: name=hugetlbfs
Nov 23 20:32:56 compute-1 groupadd[37797]: new group: name=hugetlbfs, GID=42477
Nov 23 20:32:56 compute-1 sudo[37794]: pam_unix(sudo:session): session closed for user root
Nov 23 20:32:56 compute-1 sudo[37954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuamaqrglyxwoactdtkysoummtnlhbhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929976.3895547-943-122099944711081/AnsiballZ_file.py'
Nov 23 20:32:56 compute-1 sudo[37954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:32:56 compute-1 python3.9[37956]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 23 20:32:56 compute-1 sudo[37954]: pam_unix(sudo:session): session closed for user root
Nov 23 20:32:57 compute-1 sshd-session[37826]: Invalid user master from 118.145.189.160 port 52974
Nov 23 20:32:57 compute-1 sshd-session[37826]: Received disconnect from 118.145.189.160 port 52974:11: Bye Bye [preauth]
Nov 23 20:32:57 compute-1 sshd-session[37826]: Disconnected from invalid user master 118.145.189.160 port 52974 [preauth]
Nov 23 20:32:57 compute-1 sudo[38106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubzxawkgszskutuzfdxhueddycricxfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929977.5519025-976-230544281665897/AnsiballZ_dnf.py'
Nov 23 20:32:57 compute-1 sudo[38106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:32:58 compute-1 python3.9[38108]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 20:32:59 compute-1 sudo[38106]: pam_unix(sudo:session): session closed for user root
Nov 23 20:33:00 compute-1 sudo[38259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouazrqpkdfluibruhfbuysuoeipkaazm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929980.0321531-1000-222216670453745/AnsiballZ_file.py'
Nov 23 20:33:00 compute-1 sudo[38259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:33:00 compute-1 python3.9[38261]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:33:00 compute-1 sudo[38259]: pam_unix(sudo:session): session closed for user root
Nov 23 20:33:01 compute-1 sudo[38411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iefjdogcmlpvhllcwnckvxsiwtkujpjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929980.8500473-1024-104727972828753/AnsiballZ_stat.py'
Nov 23 20:33:01 compute-1 sudo[38411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:33:01 compute-1 python3.9[38413]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:33:01 compute-1 sudo[38411]: pam_unix(sudo:session): session closed for user root
Nov 23 20:33:01 compute-1 sudo[38534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rphmqyutzywxmhhmnltrgvacthasvyvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929980.8500473-1024-104727972828753/AnsiballZ_copy.py'
Nov 23 20:33:01 compute-1 sudo[38534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:33:01 compute-1 python3.9[38536]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763929980.8500473-1024-104727972828753/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:33:01 compute-1 sudo[38534]: pam_unix(sudo:session): session closed for user root
Nov 23 20:33:02 compute-1 sudo[38686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwkkqpmtoavpgopipgacwbvkqhlhbady ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929982.3077273-1069-96222736408609/AnsiballZ_systemd.py'
Nov 23 20:33:02 compute-1 sudo[38686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:33:03 compute-1 python3.9[38688]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 20:33:03 compute-1 systemd[1]: Starting Load Kernel Modules...
Nov 23 20:33:03 compute-1 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 23 20:33:03 compute-1 kernel: Bridge firewalling registered
Nov 23 20:33:03 compute-1 systemd-modules-load[38692]: Inserted module 'br_netfilter'
Nov 23 20:33:03 compute-1 systemd[1]: Finished Load Kernel Modules.
Nov 23 20:33:03 compute-1 sudo[38686]: pam_unix(sudo:session): session closed for user root
Nov 23 20:33:03 compute-1 sudo[38845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzorvlvwlottatolkrcscmsmfkplqypn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929983.6371734-1094-63561188763700/AnsiballZ_stat.py'
Nov 23 20:33:03 compute-1 sudo[38845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:33:04 compute-1 python3.9[38847]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:33:04 compute-1 sudo[38845]: pam_unix(sudo:session): session closed for user root
Nov 23 20:33:04 compute-1 sudo[38968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfairtdeplhfdujdfvovorwcjienpoxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929983.6371734-1094-63561188763700/AnsiballZ_copy.py'
Nov 23 20:33:04 compute-1 sudo[38968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:33:04 compute-1 python3.9[38970]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763929983.6371734-1094-63561188763700/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:33:04 compute-1 sudo[38968]: pam_unix(sudo:session): session closed for user root
Nov 23 20:33:05 compute-1 sudo[39120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emoruwehbmnygcydscpicemvbxiubrnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929985.5086644-1147-267330602540533/AnsiballZ_dnf.py'
Nov 23 20:33:05 compute-1 sudo[39120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:33:06 compute-1 python3.9[39122]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 20:33:09 compute-1 dbus-broker-launch[755]: Noticed file-system modification, trigger reload.
Nov 23 20:33:09 compute-1 dbus-broker-launch[755]: Noticed file-system modification, trigger reload.
Nov 23 20:33:09 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 20:33:09 compute-1 systemd[1]: Starting man-db-cache-update.service...
Nov 23 20:33:09 compute-1 systemd[1]: Reloading.
Nov 23 20:33:09 compute-1 systemd-rc-local-generator[39183]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:33:09 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 23 20:33:10 compute-1 sudo[39120]: pam_unix(sudo:session): session closed for user root
Nov 23 20:33:11 compute-1 python3.9[41346]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:33:12 compute-1 python3.9[42420]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 23 20:33:13 compute-1 python3.9[43129]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:33:13 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 23 20:33:13 compute-1 systemd[1]: Finished man-db-cache-update.service.
Nov 23 20:33:13 compute-1 systemd[1]: man-db-cache-update.service: Consumed 4.593s CPU time.
Nov 23 20:33:13 compute-1 systemd[1]: run-r4665eb4fbeb246e3bb403347632edc8e.service: Deactivated successfully.
Nov 23 20:33:14 compute-1 sudo[43281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkborhmumxuanvizafbqyfiluzzfgfqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929993.945719-1264-112457007341215/AnsiballZ_command.py'
Nov 23 20:33:14 compute-1 sudo[43281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:33:14 compute-1 python3.9[43283]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:33:14 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 23 20:33:15 compute-1 systemd[1]: Starting Authorization Manager...
Nov 23 20:33:15 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 23 20:33:15 compute-1 polkitd[43500]: Started polkitd version 0.117
Nov 23 20:33:15 compute-1 polkitd[43500]: Loading rules from directory /etc/polkit-1/rules.d
Nov 23 20:33:15 compute-1 polkitd[43500]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 23 20:33:15 compute-1 polkitd[43500]: Finished loading, compiling and executing 2 rules
Nov 23 20:33:15 compute-1 polkitd[43500]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Nov 23 20:33:15 compute-1 systemd[1]: Started Authorization Manager.
Nov 23 20:33:15 compute-1 sudo[43281]: pam_unix(sudo:session): session closed for user root
Nov 23 20:33:16 compute-1 sudo[43668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tezzeodnjcwaoiaikfgbszlwthiozucu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763929995.816125-1291-77582347673400/AnsiballZ_systemd.py'
Nov 23 20:33:16 compute-1 sudo[43668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:33:16 compute-1 python3.9[43670]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 20:33:16 compute-1 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 23 20:33:16 compute-1 systemd[1]: tuned.service: Deactivated successfully.
Nov 23 20:33:16 compute-1 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 23 20:33:16 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 23 20:33:16 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 23 20:33:16 compute-1 sudo[43668]: pam_unix(sudo:session): session closed for user root
Nov 23 20:33:17 compute-1 python3.9[43834]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 23 20:33:18 compute-1 sshd-session[43707]: Received disconnect from 43.225.142.116 port 50690:11: Bye Bye [preauth]
Nov 23 20:33:18 compute-1 sshd-session[43707]: Disconnected from authenticating user root 43.225.142.116 port 50690 [preauth]
Nov 23 20:33:21 compute-1 sudo[43984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnvmyxqbxyndgexqikztdvgdnqnemjrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930000.9055064-1462-23112188079259/AnsiballZ_systemd.py'
Nov 23 20:33:21 compute-1 sudo[43984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:33:21 compute-1 python3.9[43986]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 20:33:21 compute-1 systemd[1]: Reloading.
Nov 23 20:33:21 compute-1 systemd-rc-local-generator[44015]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:33:21 compute-1 sudo[43984]: pam_unix(sudo:session): session closed for user root
Nov 23 20:33:22 compute-1 sudo[44173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhsctcfxeebqcmfrlbgxvmzpefbpseaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930001.8181386-1462-134615236132100/AnsiballZ_systemd.py'
Nov 23 20:33:22 compute-1 sudo[44173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:33:22 compute-1 python3.9[44175]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 20:33:22 compute-1 systemd[1]: Reloading.
Nov 23 20:33:22 compute-1 systemd-rc-local-generator[44204]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:33:22 compute-1 sudo[44173]: pam_unix(sudo:session): session closed for user root
Nov 23 20:33:23 compute-1 sudo[44362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dijdwmjpzfqedcksfzsconqkmqfhyker ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930003.6543398-1510-281002319285633/AnsiballZ_command.py'
Nov 23 20:33:23 compute-1 sudo[44362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:33:24 compute-1 python3.9[44364]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:33:24 compute-1 sudo[44362]: pam_unix(sudo:session): session closed for user root
Nov 23 20:33:24 compute-1 sudo[44515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwtcyghnordamekesmathjbfgwgtkhxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930004.5457568-1534-79039401512818/AnsiballZ_command.py'
Nov 23 20:33:24 compute-1 sudo[44515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:33:25 compute-1 python3.9[44517]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:33:25 compute-1 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 23 20:33:25 compute-1 sudo[44515]: pam_unix(sudo:session): session closed for user root
Nov 23 20:33:25 compute-1 sudo[44668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fptipiicqhlpkxgqauknbfafdsjwsvqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930005.4343283-1558-179241274261922/AnsiballZ_command.py'
Nov 23 20:33:25 compute-1 sudo[44668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:33:25 compute-1 python3.9[44670]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:33:27 compute-1 sudo[44668]: pam_unix(sudo:session): session closed for user root
Nov 23 20:33:28 compute-1 sudo[44830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkqskuihfxwagfcchsscbekgnkvcjqxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930007.8097193-1582-146968858051848/AnsiballZ_command.py'
Nov 23 20:33:28 compute-1 sudo[44830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:33:28 compute-1 python3.9[44832]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:33:28 compute-1 sudo[44830]: pam_unix(sudo:session): session closed for user root
Nov 23 20:33:29 compute-1 sudo[44983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmnwqydobbxlbvwwszogsapyjorsilhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930008.6734607-1606-225598225290892/AnsiballZ_systemd.py'
Nov 23 20:33:29 compute-1 sudo[44983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:33:29 compute-1 python3.9[44985]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 20:33:29 compute-1 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 23 20:33:29 compute-1 systemd[1]: Stopped Apply Kernel Variables.
Nov 23 20:33:29 compute-1 systemd[1]: Stopping Apply Kernel Variables...
Nov 23 20:33:29 compute-1 systemd[1]: Starting Apply Kernel Variables...
Nov 23 20:33:29 compute-1 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 23 20:33:29 compute-1 systemd[1]: Finished Apply Kernel Variables.
Nov 23 20:33:29 compute-1 sudo[44983]: pam_unix(sudo:session): session closed for user root
Nov 23 20:33:30 compute-1 sshd-session[31406]: Connection closed by 192.168.122.30 port 47650
Nov 23 20:33:30 compute-1 sshd-session[31403]: pam_unix(sshd:session): session closed for user zuul
Nov 23 20:33:30 compute-1 systemd[1]: session-10.scope: Deactivated successfully.
Nov 23 20:33:30 compute-1 systemd[1]: session-10.scope: Consumed 2min 12.797s CPU time.
Nov 23 20:33:30 compute-1 systemd-logind[793]: Session 10 logged out. Waiting for processes to exit.
Nov 23 20:33:30 compute-1 systemd-logind[793]: Removed session 10.
Nov 23 20:33:32 compute-1 sshd-session[45016]: Connection closed by authenticating user root 185.156.73.233 port 32746 [preauth]
Nov 23 20:33:35 compute-1 sshd-session[45018]: Accepted publickey for zuul from 192.168.122.30 port 57620 ssh2: ECDSA SHA256:7LF3rB/846W//CS4OIcVKlH1BXQGVCcZuH+b9rjPyTo
Nov 23 20:33:35 compute-1 systemd-logind[793]: New session 11 of user zuul.
Nov 23 20:33:35 compute-1 systemd[1]: Started Session 11 of User zuul.
Nov 23 20:33:35 compute-1 sshd-session[45018]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 20:33:37 compute-1 python3.9[45171]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:33:38 compute-1 sudo[45325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhoofovyjlcqjnzvsntvdrezwjycxddn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930018.3059888-69-87753244749382/AnsiballZ_getent.py'
Nov 23 20:33:38 compute-1 sudo[45325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:33:38 compute-1 python3.9[45327]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 23 20:33:38 compute-1 sudo[45325]: pam_unix(sudo:session): session closed for user root
Nov 23 20:33:39 compute-1 sudo[45478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azyjeoacjlgqvtncihmtpkcxirgldyfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930019.2073007-93-122806272101255/AnsiballZ_group.py'
Nov 23 20:33:39 compute-1 sudo[45478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:33:39 compute-1 python3.9[45480]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 23 20:33:39 compute-1 groupadd[45481]: group added to /etc/group: name=openvswitch, GID=42476
Nov 23 20:33:40 compute-1 groupadd[45481]: group added to /etc/gshadow: name=openvswitch
Nov 23 20:33:40 compute-1 groupadd[45481]: new group: name=openvswitch, GID=42476
Nov 23 20:33:40 compute-1 sudo[45478]: pam_unix(sudo:session): session closed for user root
Nov 23 20:33:40 compute-1 sudo[45636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvgemoywythgoehtbfrskceltebmfaqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930020.4392548-117-41197650773652/AnsiballZ_user.py'
Nov 23 20:33:40 compute-1 sudo[45636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:33:41 compute-1 python3.9[45638]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 23 20:33:41 compute-1 useradd[45640]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Nov 23 20:33:41 compute-1 useradd[45640]: add 'openvswitch' to group 'hugetlbfs'
Nov 23 20:33:41 compute-1 useradd[45640]: add 'openvswitch' to shadow group 'hugetlbfs'
Nov 23 20:33:41 compute-1 sudo[45636]: pam_unix(sudo:session): session closed for user root
Nov 23 20:33:42 compute-1 sudo[45796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unefbewvjthsheepltadqljcshqwuggr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930022.0814645-147-244964499568840/AnsiballZ_setup.py'
Nov 23 20:33:42 compute-1 sudo[45796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:33:42 compute-1 python3.9[45798]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 20:33:42 compute-1 sudo[45796]: pam_unix(sudo:session): session closed for user root
Nov 23 20:33:43 compute-1 sudo[45880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbvjxccueeieyrbxwwdznxanfqstuodl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930022.0814645-147-244964499568840/AnsiballZ_dnf.py'
Nov 23 20:33:43 compute-1 sudo[45880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:33:43 compute-1 python3.9[45882]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 23 20:33:45 compute-1 sudo[45880]: pam_unix(sudo:session): session closed for user root
Nov 23 20:33:46 compute-1 sudo[46043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lupzdaczsksjwnmtdjjngmrwnsyumqpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930026.1838086-189-73960767127901/AnsiballZ_dnf.py'
Nov 23 20:33:46 compute-1 sudo[46043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:33:46 compute-1 python3.9[46045]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 20:33:58 compute-1 sshd-session[46064]: Received disconnect from 34.91.0.68 port 54574:11: Bye Bye [preauth]
Nov 23 20:33:58 compute-1 sshd-session[46064]: Disconnected from authenticating user root 34.91.0.68 port 54574 [preauth]
Nov 23 20:33:58 compute-1 kernel: SELinux:  Converting 2730 SID table entries...
Nov 23 20:33:58 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 20:33:58 compute-1 kernel: SELinux:  policy capability open_perms=1
Nov 23 20:33:58 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 20:33:58 compute-1 kernel: SELinux:  policy capability always_check_network=0
Nov 23 20:33:58 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 20:33:58 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 20:33:58 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 20:33:58 compute-1 groupadd[46072]: group added to /etc/group: name=unbound, GID=993
Nov 23 20:33:58 compute-1 groupadd[46072]: group added to /etc/gshadow: name=unbound
Nov 23 20:33:58 compute-1 groupadd[46072]: new group: name=unbound, GID=993
Nov 23 20:33:58 compute-1 useradd[46079]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Nov 23 20:33:58 compute-1 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 23 20:33:58 compute-1 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 23 20:33:59 compute-1 sshd-session[46066]: Invalid user user2 from 102.176.81.29 port 56086
Nov 23 20:33:59 compute-1 sshd-session[46066]: Received disconnect from 102.176.81.29 port 56086:11: Bye Bye [preauth]
Nov 23 20:33:59 compute-1 sshd-session[46066]: Disconnected from invalid user user2 102.176.81.29 port 56086 [preauth]
Nov 23 20:34:00 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 20:34:00 compute-1 systemd[1]: Starting man-db-cache-update.service...
Nov 23 20:34:01 compute-1 systemd[1]: Reloading.
Nov 23 20:34:01 compute-1 systemd-rc-local-generator[46578]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:34:01 compute-1 systemd-sysv-generator[46581]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:34:01 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 23 20:34:02 compute-1 sudo[46043]: pam_unix(sudo:session): session closed for user root
Nov 23 20:34:02 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 23 20:34:02 compute-1 systemd[1]: Finished man-db-cache-update.service.
Nov 23 20:34:02 compute-1 systemd[1]: run-r8caf589173554f3fb3dc848d06fd1a03.service: Deactivated successfully.
Nov 23 20:34:03 compute-1 sudo[47145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tiuyfpwnkhxprmavuxjbbmrssakhrmlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930042.5647664-213-126563511636105/AnsiballZ_systemd.py'
Nov 23 20:34:03 compute-1 sudo[47145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:34:03 compute-1 python3.9[47147]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 23 20:34:03 compute-1 systemd[1]: Reloading.
Nov 23 20:34:03 compute-1 systemd-rc-local-generator[47172]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:34:03 compute-1 systemd-sysv-generator[47180]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:34:03 compute-1 systemd[1]: Starting Open vSwitch Database Unit...
Nov 23 20:34:03 compute-1 chown[47189]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 23 20:34:03 compute-1 ovs-ctl[47194]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 23 20:34:03 compute-1 ovs-ctl[47194]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 23 20:34:03 compute-1 ovs-ctl[47194]: Starting ovsdb-server [  OK  ]
Nov 23 20:34:03 compute-1 ovs-vsctl[47243]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 23 20:34:04 compute-1 ovs-vsctl[47260]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"d8ff4ac4-2bee-48db-b79e-2466bc4db046\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 23 20:34:04 compute-1 ovs-ctl[47194]: Configuring Open vSwitch system IDs [  OK  ]
Nov 23 20:34:04 compute-1 ovs-vsctl[47269]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Nov 23 20:34:04 compute-1 ovs-ctl[47194]: Enabling remote OVSDB managers [  OK  ]
Nov 23 20:34:04 compute-1 systemd[1]: Started Open vSwitch Database Unit.
Nov 23 20:34:04 compute-1 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 23 20:34:04 compute-1 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 23 20:34:04 compute-1 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 23 20:34:04 compute-1 kernel: openvswitch: Open vSwitch switching datapath
Nov 23 20:34:04 compute-1 ovs-ctl[47313]: Inserting openvswitch module [  OK  ]
Nov 23 20:34:04 compute-1 ovs-ctl[47282]: Starting ovs-vswitchd [  OK  ]
Nov 23 20:34:04 compute-1 ovs-vsctl[47331]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Nov 23 20:34:04 compute-1 ovs-ctl[47282]: Enabling remote OVSDB managers [  OK  ]
Nov 23 20:34:04 compute-1 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 23 20:34:04 compute-1 systemd[1]: Starting Open vSwitch...
Nov 23 20:34:04 compute-1 systemd[1]: Finished Open vSwitch.
Nov 23 20:34:04 compute-1 sudo[47145]: pam_unix(sudo:session): session closed for user root
Nov 23 20:34:06 compute-1 python3.9[47483]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:34:08 compute-1 sudo[47633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fodgbrdjrxunbunxwcpgyxtgaspyrhkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930047.9424372-267-258840715417181/AnsiballZ_sefcontext.py'
Nov 23 20:34:08 compute-1 sudo[47633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:34:08 compute-1 python3.9[47635]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 23 20:34:10 compute-1 kernel: SELinux:  Converting 2744 SID table entries...
Nov 23 20:34:10 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 20:34:10 compute-1 kernel: SELinux:  policy capability open_perms=1
Nov 23 20:34:10 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 20:34:10 compute-1 kernel: SELinux:  policy capability always_check_network=0
Nov 23 20:34:10 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 20:34:10 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 20:34:10 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 20:34:10 compute-1 sudo[47633]: pam_unix(sudo:session): session closed for user root
Nov 23 20:34:11 compute-1 python3.9[47790]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:34:12 compute-1 sudo[47946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdziicvthqhpgebavwkxkbpxlmbfqhrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930051.9259727-321-154877262304806/AnsiballZ_dnf.py'
Nov 23 20:34:12 compute-1 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 23 20:34:12 compute-1 sudo[47946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:34:12 compute-1 python3.9[47948]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 20:34:13 compute-1 sudo[47946]: pam_unix(sudo:session): session closed for user root
Nov 23 20:34:14 compute-1 sudo[48099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkuzrlbwydubgqhonxdthyqmtqttlvzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930053.9652505-345-202694937086688/AnsiballZ_command.py'
Nov 23 20:34:14 compute-1 sudo[48099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:34:14 compute-1 python3.9[48101]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:34:15 compute-1 sudo[48099]: pam_unix(sudo:session): session closed for user root
Nov 23 20:34:16 compute-1 sudo[48386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bypeapclthluqgyoapatlzyvqkcsizhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930055.7181125-369-60216173126585/AnsiballZ_file.py'
Nov 23 20:34:16 compute-1 sudo[48386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:34:16 compute-1 python3.9[48388]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 23 20:34:16 compute-1 sudo[48386]: pam_unix(sudo:session): session closed for user root
Nov 23 20:34:16 compute-1 sshd-session[48389]: Invalid user solv from 161.35.179.103 port 56812
Nov 23 20:34:16 compute-1 sshd-session[48389]: Connection closed by invalid user solv 161.35.179.103 port 56812 [preauth]
Nov 23 20:34:17 compute-1 python3.9[48540]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:34:17 compute-1 sudo[48692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sedhcpkfvdwzqxmpsvqtsoimtbdwondg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930057.4443462-417-259800597674868/AnsiballZ_dnf.py'
Nov 23 20:34:17 compute-1 sudo[48692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:34:18 compute-1 python3.9[48694]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 20:34:19 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 20:34:19 compute-1 systemd[1]: Starting man-db-cache-update.service...
Nov 23 20:34:19 compute-1 systemd[1]: Reloading.
Nov 23 20:34:19 compute-1 systemd-rc-local-generator[48729]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:34:19 compute-1 systemd-sysv-generator[48735]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:34:20 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 23 20:34:20 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 23 20:34:20 compute-1 systemd[1]: Finished man-db-cache-update.service.
Nov 23 20:34:20 compute-1 systemd[1]: run-r41905e1977c54505b91471cb03c3b8bf.service: Deactivated successfully.
Nov 23 20:34:20 compute-1 sudo[48692]: pam_unix(sudo:session): session closed for user root
Nov 23 20:34:21 compute-1 sudo[49010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiykmzzibjruiwpnnbdioitidgbshmkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930060.768056-441-110509399536624/AnsiballZ_systemd.py'
Nov 23 20:34:21 compute-1 sudo[49010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:34:21 compute-1 python3.9[49012]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 20:34:21 compute-1 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 23 20:34:21 compute-1 systemd[1]: Stopped Network Manager Wait Online.
Nov 23 20:34:21 compute-1 systemd[1]: Stopping Network Manager Wait Online...
Nov 23 20:34:21 compute-1 systemd[1]: Stopping Network Manager...
Nov 23 20:34:21 compute-1 NetworkManager[7191]: <info>  [1763930061.3874] caught SIGTERM, shutting down normally.
Nov 23 20:34:21 compute-1 NetworkManager[7191]: <info>  [1763930061.3893] dhcp4 (eth0): canceled DHCP transaction
Nov 23 20:34:21 compute-1 NetworkManager[7191]: <info>  [1763930061.3894] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 23 20:34:21 compute-1 NetworkManager[7191]: <info>  [1763930061.3894] dhcp4 (eth0): state changed no lease
Nov 23 20:34:21 compute-1 NetworkManager[7191]: <info>  [1763930061.3896] manager: NetworkManager state is now CONNECTED_SITE
Nov 23 20:34:21 compute-1 NetworkManager[7191]: <info>  [1763930061.3957] exiting (success)
Nov 23 20:34:21 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 23 20:34:21 compute-1 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 23 20:34:21 compute-1 systemd[1]: Stopped Network Manager.
Nov 23 20:34:21 compute-1 systemd[1]: NetworkManager.service: Consumed 12.012s CPU time, 4.0M memory peak, read 0B from disk, written 45.0K to disk.
Nov 23 20:34:21 compute-1 systemd[1]: Starting Network Manager...
Nov 23 20:34:21 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.4480] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:6edcf464-8554-408a-ba56-0bae3cf8aec4)
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.4481] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.4535] manager[0x555c7ef90090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 23 20:34:21 compute-1 systemd[1]: Starting Hostname Service...
Nov 23 20:34:21 compute-1 systemd[1]: Started Hostname Service.
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5395] hostname: hostname: using hostnamed
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5399] hostname: static hostname changed from (none) to "compute-1"
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5404] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5408] manager[0x555c7ef90090]: rfkill: Wi-Fi hardware radio set enabled
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5409] manager[0x555c7ef90090]: rfkill: WWAN hardware radio set enabled
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5427] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5435] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5435] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5436] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5436] manager: Networking is enabled by state file
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5438] settings: Loaded settings plugin: keyfile (internal)
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5441] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5461] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5468] dhcp: init: Using DHCP client 'internal'
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5470] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5475] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5479] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5485] device (lo): Activation: starting connection 'lo' (170402d3-84eb-4bc9-a75c-092c5ddf07e9)
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5491] device (eth0): carrier: link connected
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5495] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5500] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5500] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5506] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5512] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5517] device (eth1): carrier: link connected
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5521] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5525] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (c8f28de1-00ce-5ad5-b1e7-36e35b879f57) (indicated)
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5526] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5530] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5536] device (eth1): Activation: starting connection 'ci-private-network' (c8f28de1-00ce-5ad5-b1e7-36e35b879f57)
Nov 23 20:34:21 compute-1 systemd[1]: Started Network Manager.
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5541] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5548] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5550] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5552] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5553] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5555] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5557] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5560] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5563] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5569] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5572] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5593] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5605] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5614] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5617] dhcp4 (eth0): state changed new lease, address=38.102.83.106
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5620] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5625] device (lo): Activation: successful, device activated.
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5632] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 23 20:34:21 compute-1 systemd[1]: Starting Network Manager Wait Online...
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5697] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5704] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5705] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5711] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5714] device (eth1): Activation: successful, device activated.
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5732] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5733] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5736] manager: NetworkManager state is now CONNECTED_SITE
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5740] device (eth0): Activation: successful, device activated.
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5745] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 23 20:34:21 compute-1 NetworkManager[49021]: <info>  [1763930061.5747] manager: startup complete
Nov 23 20:34:21 compute-1 sudo[49010]: pam_unix(sudo:session): session closed for user root
Nov 23 20:34:21 compute-1 systemd[1]: Finished Network Manager Wait Online.
Nov 23 20:34:22 compute-1 sudo[49239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okhynlimjpdjotqurxbjmaoimqtklsfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930061.9950948-465-185180920707201/AnsiballZ_dnf.py'
Nov 23 20:34:22 compute-1 sudo[49239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:34:22 compute-1 python3.9[49241]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 20:34:23 compute-1 sshd-session[49135]: Received disconnect from 43.225.142.116 port 46818:11: Bye Bye [preauth]
Nov 23 20:34:23 compute-1 sshd-session[49135]: Disconnected from authenticating user root 43.225.142.116 port 46818 [preauth]
Nov 23 20:34:26 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 20:34:26 compute-1 systemd[1]: Starting man-db-cache-update.service...
Nov 23 20:34:27 compute-1 systemd[1]: Reloading.
Nov 23 20:34:27 compute-1 systemd-rc-local-generator[49291]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:34:27 compute-1 systemd-sysv-generator[49296]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:34:27 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 23 20:34:28 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 23 20:34:28 compute-1 systemd[1]: Finished man-db-cache-update.service.
Nov 23 20:34:28 compute-1 systemd[1]: run-r5c1f9044aa364421af8cd5482479367b.service: Deactivated successfully.
Nov 23 20:34:28 compute-1 sudo[49239]: pam_unix(sudo:session): session closed for user root
Nov 23 20:34:29 compute-1 sudo[49700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-werjnvafjavrezbahmmduktolartarna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930069.3194056-501-250164657781340/AnsiballZ_stat.py'
Nov 23 20:34:29 compute-1 sudo[49700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:34:29 compute-1 python3.9[49702]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:34:29 compute-1 sudo[49700]: pam_unix(sudo:session): session closed for user root
Nov 23 20:34:30 compute-1 sudo[49852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoxhiemtkqkxhfadnrmahhqufajondmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930070.2220688-528-52832062712725/AnsiballZ_ini_file.py'
Nov 23 20:34:30 compute-1 sudo[49852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:34:30 compute-1 python3.9[49854]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:34:30 compute-1 sudo[49852]: pam_unix(sudo:session): session closed for user root
Nov 23 20:34:31 compute-1 sudo[50006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiehfxdosngjefqgsxabdbnyhemiqvyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930071.319294-558-213431410294486/AnsiballZ_ini_file.py'
Nov 23 20:34:31 compute-1 sudo[50006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:34:31 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 23 20:34:31 compute-1 python3.9[50008]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:34:31 compute-1 sudo[50006]: pam_unix(sudo:session): session closed for user root
Nov 23 20:34:32 compute-1 sudo[50158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvdjicbcyltwffpefrzcczibvlitllqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930072.068991-558-40867860499986/AnsiballZ_ini_file.py'
Nov 23 20:34:32 compute-1 sudo[50158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:34:32 compute-1 python3.9[50160]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:34:32 compute-1 sudo[50158]: pam_unix(sudo:session): session closed for user root
Nov 23 20:34:33 compute-1 sudo[50310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxcdgrdmhgbvnlwpnhrndgxcwzrlflqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930072.8984668-603-144249718237054/AnsiballZ_ini_file.py'
Nov 23 20:34:33 compute-1 sudo[50310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:34:33 compute-1 python3.9[50312]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:34:33 compute-1 sudo[50310]: pam_unix(sudo:session): session closed for user root
Nov 23 20:34:33 compute-1 sudo[50462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evyvmyrtglgaqutaqmtozrauieixctzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930073.5357375-603-456134476338/AnsiballZ_ini_file.py'
Nov 23 20:34:33 compute-1 sudo[50462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:34:34 compute-1 python3.9[50464]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:34:34 compute-1 sudo[50462]: pam_unix(sudo:session): session closed for user root
Nov 23 20:34:35 compute-1 sudo[50614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnfcxjlzgumylxwlhnfxjsrqdbqdfsfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930074.5595515-648-259711101036548/AnsiballZ_stat.py'
Nov 23 20:34:35 compute-1 sudo[50614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:34:35 compute-1 python3.9[50616]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:34:35 compute-1 sudo[50614]: pam_unix(sudo:session): session closed for user root
Nov 23 20:34:35 compute-1 sudo[50737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwnddtnhqyzzkfxotoaizcbbravvrido ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930074.5595515-648-259711101036548/AnsiballZ_copy.py'
Nov 23 20:34:35 compute-1 sudo[50737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:34:35 compute-1 python3.9[50739]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930074.5595515-648-259711101036548/.source _original_basename=.0daqlenn follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:34:35 compute-1 sudo[50737]: pam_unix(sudo:session): session closed for user root
Nov 23 20:34:36 compute-1 sshd-session[50740]: Invalid user solana from 92.118.39.92 port 36586
Nov 23 20:34:36 compute-1 sshd-session[50740]: Connection closed by invalid user solana 92.118.39.92 port 36586 [preauth]
Nov 23 20:34:36 compute-1 sudo[50891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxdsofrseywetpolqjygbcycauiuvxyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930076.0834675-693-146667443037716/AnsiballZ_file.py'
Nov 23 20:34:36 compute-1 sudo[50891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:34:36 compute-1 python3.9[50893]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:34:36 compute-1 sudo[50891]: pam_unix(sudo:session): session closed for user root
Nov 23 20:34:37 compute-1 sudo[51043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krwhvqwmnkwrgbemwoayncfdvgsbcmph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930077.11024-717-161048811347427/AnsiballZ_edpm_os_net_config_mappings.py'
Nov 23 20:34:37 compute-1 sudo[51043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:34:37 compute-1 python3.9[51045]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 23 20:34:37 compute-1 sudo[51043]: pam_unix(sudo:session): session closed for user root
Nov 23 20:34:38 compute-1 sudo[51195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axaozajkwbxvjhgazcdtssjpmyxcxxdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930078.230129-744-190424993451661/AnsiballZ_file.py'
Nov 23 20:34:38 compute-1 sudo[51195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:34:38 compute-1 python3.9[51197]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:34:38 compute-1 sudo[51195]: pam_unix(sudo:session): session closed for user root
Nov 23 20:34:39 compute-1 sudo[51347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqxwvpswniiyzxyyhzbkmmewmpbjsddv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930079.1259732-774-276939119894781/AnsiballZ_stat.py'
Nov 23 20:34:39 compute-1 sudo[51347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:34:39 compute-1 sudo[51347]: pam_unix(sudo:session): session closed for user root
Nov 23 20:34:39 compute-1 sudo[51470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eewmfofhfyohxcpqitcgjfejgmgdvtya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930079.1259732-774-276939119894781/AnsiballZ_copy.py'
Nov 23 20:34:39 compute-1 sudo[51470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:34:40 compute-1 sudo[51470]: pam_unix(sudo:session): session closed for user root
Nov 23 20:34:40 compute-1 sudo[51622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpmpxeqalmnepejqstbpdfntqpadvetg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930080.607186-819-179965191790500/AnsiballZ_slurp.py'
Nov 23 20:34:40 compute-1 sudo[51622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:34:41 compute-1 python3.9[51624]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 23 20:34:41 compute-1 sudo[51622]: pam_unix(sudo:session): session closed for user root
Nov 23 20:34:42 compute-1 sudo[51797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eklysfuxwixkjynpycydvhgergvhjxsd ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930081.4838538-846-47931802382160/async_wrapper.py j595449502514 300 /home/zuul/.ansible/tmp/ansible-tmp-1763930081.4838538-846-47931802382160/AnsiballZ_edpm_os_net_config.py _'
Nov 23 20:34:42 compute-1 sudo[51797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:34:42 compute-1 ansible-async_wrapper.py[51799]: Invoked with j595449502514 300 /home/zuul/.ansible/tmp/ansible-tmp-1763930081.4838538-846-47931802382160/AnsiballZ_edpm_os_net_config.py _
Nov 23 20:34:42 compute-1 ansible-async_wrapper.py[51802]: Starting module and watcher
Nov 23 20:34:42 compute-1 ansible-async_wrapper.py[51802]: Start watching 51803 (300)
Nov 23 20:34:42 compute-1 ansible-async_wrapper.py[51803]: Start module (51803)
Nov 23 20:34:42 compute-1 ansible-async_wrapper.py[51799]: Return async_wrapper task started.
Nov 23 20:34:42 compute-1 sudo[51797]: pam_unix(sudo:session): session closed for user root
Nov 23 20:34:42 compute-1 python3.9[51804]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Nov 23 20:34:43 compute-1 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 23 20:34:43 compute-1 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 23 20:34:43 compute-1 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 23 20:34:43 compute-1 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 23 20:34:43 compute-1 kernel: cfg80211: failed to load regulatory.db
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5098] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51805 uid=0 result="success"
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5117] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51805 uid=0 result="success"
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5649] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5651] audit: op="connection-add" uuid="c988ce38-27c2-4d3c-85ec-06b32df62858" name="br-ex-br" pid=51805 uid=0 result="success"
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5667] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5668] audit: op="connection-add" uuid="8cc83dc2-9293-41a4-b95c-e9edf11e20ca" name="br-ex-port" pid=51805 uid=0 result="success"
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5680] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5682] audit: op="connection-add" uuid="f5c2ae14-da24-43ef-9318-3cf2c229c249" name="eth1-port" pid=51805 uid=0 result="success"
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5695] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5697] audit: op="connection-add" uuid="bf5464d3-55b8-4fc0-8fcf-6b47d314fb69" name="vlan20-port" pid=51805 uid=0 result="success"
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5710] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5711] audit: op="connection-add" uuid="4965df62-2909-48eb-a3a9-d8bd87b0cd5f" name="vlan21-port" pid=51805 uid=0 result="success"
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5723] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5725] audit: op="connection-add" uuid="cc36bd65-7fc5-43b8-9ec5-4e4b17c40a8f" name="vlan22-port" pid=51805 uid=0 result="success"
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5737] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5739] audit: op="connection-add" uuid="8486841e-2ac8-4483-9797-ca325237ecfa" name="vlan23-port" pid=51805 uid=0 result="success"
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5760] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.method,ipv6.addr-gen-mode,ipv6.dhcp-timeout,802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51805 uid=0 result="success"
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5777] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5779] audit: op="connection-add" uuid="a72cb7f2-1249-4a6e-ba90-aca60299fcae" name="br-ex-if" pid=51805 uid=0 result="success"
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5815] audit: op="connection-update" uuid="c8f28de1-00ce-5ad5-b1e7-36e35b879f57" name="ci-private-network" args="ipv6.addresses,ipv6.dns,ipv6.routing-rules,ipv6.method,ipv6.addr-gen-mode,ipv6.routes,ovs-external-ids.data,connection.controller,connection.slave-type,connection.master,connection.port-type,connection.timestamp,ipv4.addresses,ipv4.dns,ipv4.routing-rules,ipv4.method,ipv4.never-default,ipv4.routes,ovs-interface.type" pid=51805 uid=0 result="success"
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5834] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5836] audit: op="connection-add" uuid="fe8b5918-fff8-49b1-94f9-0962986fea5d" name="vlan20-if" pid=51805 uid=0 result="success"
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5853] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5855] audit: op="connection-add" uuid="89237f34-1baf-4498-87f3-65d56f6285d0" name="vlan21-if" pid=51805 uid=0 result="success"
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5872] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5874] audit: op="connection-add" uuid="680d22b5-afb2-440f-8eb3-4c40fd54294d" name="vlan22-if" pid=51805 uid=0 result="success"
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5891] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5892] audit: op="connection-add" uuid="2c19b141-69af-44dc-9ab3-ce6717d58c14" name="vlan23-if" pid=51805 uid=0 result="success"
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5905] audit: op="connection-delete" uuid="b8d72197-27ea-3e22-9d94-94c7806ccb0f" name="Wired connection 1" pid=51805 uid=0 result="success"
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5917] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5928] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5932] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (c988ce38-27c2-4d3c-85ec-06b32df62858)
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5932] audit: op="connection-activate" uuid="c988ce38-27c2-4d3c-85ec-06b32df62858" name="br-ex-br" pid=51805 uid=0 result="success"
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5934] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5941] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5945] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (8cc83dc2-9293-41a4-b95c-e9edf11e20ca)
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5947] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5953] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5957] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (f5c2ae14-da24-43ef-9318-3cf2c229c249)
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5959] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.5965] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6010] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (bf5464d3-55b8-4fc0-8fcf-6b47d314fb69)
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6012] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6021] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6024] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (4965df62-2909-48eb-a3a9-d8bd87b0cd5f)
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6026] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6032] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6035] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (cc36bd65-7fc5-43b8-9ec5-4e4b17c40a8f)
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6037] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6043] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6048] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (8486841e-2ac8-4483-9797-ca325237ecfa)
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6049] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6051] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6052] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6059] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6063] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6067] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (a72cb7f2-1249-4a6e-ba90-aca60299fcae)
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6068] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6071] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6072] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6073] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6074] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6087] device (eth1): disconnecting for new activation request.
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6087] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6090] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6092] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6093] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6095] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6099] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6104] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (fe8b5918-fff8-49b1-94f9-0962986fea5d)
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6105] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6107] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6109] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6110] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6112] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6116] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6119] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (89237f34-1baf-4498-87f3-65d56f6285d0)
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6120] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6123] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6124] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6125] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6127] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6131] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6134] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (680d22b5-afb2-440f-8eb3-4c40fd54294d)
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6135] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6137] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6139] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6140] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6143] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6146] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6150] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (2c19b141-69af-44dc-9ab3-ce6717d58c14)
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6151] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6153] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6155] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6156] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6159] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6170] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.method,ipv6.addr-gen-mode,802-3-ethernet.mtu,connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51805 uid=0 result="success"
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6172] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6174] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6175] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6181] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6185] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6188] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6191] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6192] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6196] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6199] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6202] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6204] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6207] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6211] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6215] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6217] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6223] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6227] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6231] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6233] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6238] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6242] dhcp4 (eth0): canceled DHCP transaction
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6242] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6242] dhcp4 (eth0): state changed no lease
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6243] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6263] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51805 uid=0 result="fail" reason="Device is not activated"
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6296] device (eth1): disconnecting for new activation request.
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6296] audit: op="connection-activate" uuid="c8f28de1-00ce-5ad5-b1e7-36e35b879f57" name="ci-private-network" pid=51805 uid=0 result="success"
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6304] dhcp4 (eth0): state changed new lease, address=38.102.83.106
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6747] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 23 20:34:44 compute-1 kernel: ovs-system: entered promiscuous mode
Nov 23 20:34:44 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6758] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6765] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6773] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 23 20:34:44 compute-1 kernel: Timeout policy base is empty
Nov 23 20:34:44 compute-1 systemd-udevd[51809]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6790] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6792] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51805 uid=0 result="success"
Nov 23 20:34:44 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6863] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6938] device (eth1): Activation: starting connection 'ci-private-network' (c8f28de1-00ce-5ad5-b1e7-36e35b879f57)
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6952] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6955] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6961] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6963] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6963] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6965] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6967] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6969] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6970] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6981] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6992] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.6998] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7003] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7010] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7014] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7022] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7026] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7033] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7038] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7043] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7046] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7051] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7056] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7062] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7068] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7074] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7111] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7114] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7122] device (eth1): Activation: successful, device activated.
Nov 23 20:34:44 compute-1 kernel: br-ex: entered promiscuous mode
Nov 23 20:34:44 compute-1 systemd-udevd[51811]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 20:34:44 compute-1 kernel: vlan22: entered promiscuous mode
Nov 23 20:34:44 compute-1 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7276] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7289] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7304] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7308] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7313] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 23 20:34:44 compute-1 kernel: vlan23: entered promiscuous mode
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7366] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7376] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 kernel: vlan20: entered promiscuous mode
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7394] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7396] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7401] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7444] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 23 20:34:44 compute-1 kernel: vlan21: entered promiscuous mode
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7458] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7487] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7488] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7490] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7497] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7510] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7543] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7544] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7550] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7568] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7579] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7610] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7611] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 20:34:44 compute-1 NetworkManager[49021]: <info>  [1763930084.7616] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 23 20:34:45 compute-1 NetworkManager[49021]: <info>  [1763930085.8800] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51805 uid=0 result="success"
Nov 23 20:34:46 compute-1 NetworkManager[49021]: <info>  [1763930086.0436] checkpoint[0x555c7ef66950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 23 20:34:46 compute-1 NetworkManager[49021]: <info>  [1763930086.0438] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51805 uid=0 result="success"
Nov 23 20:34:46 compute-1 sudo[52161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkvfzrkesadyvcqfrtytbobbybylrrdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930085.707511-846-229716478987567/AnsiballZ_async_status.py'
Nov 23 20:34:46 compute-1 sudo[52161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:34:46 compute-1 python3.9[52163]: ansible-ansible.legacy.async_status Invoked with jid=j595449502514.51799 mode=status _async_dir=/root/.ansible_async
Nov 23 20:34:46 compute-1 sudo[52161]: pam_unix(sudo:session): session closed for user root
Nov 23 20:34:46 compute-1 NetworkManager[49021]: <info>  [1763930086.3486] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51805 uid=0 result="success"
Nov 23 20:34:46 compute-1 NetworkManager[49021]: <info>  [1763930086.3499] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51805 uid=0 result="success"
Nov 23 20:34:46 compute-1 NetworkManager[49021]: <info>  [1763930086.5964] audit: op="networking-control" arg="global-dns-configuration" pid=51805 uid=0 result="success"
Nov 23 20:34:46 compute-1 NetworkManager[49021]: <info>  [1763930086.6005] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 23 20:34:46 compute-1 NetworkManager[49021]: <info>  [1763930086.6050] audit: op="networking-control" arg="global-dns-configuration" pid=51805 uid=0 result="success"
Nov 23 20:34:46 compute-1 NetworkManager[49021]: <info>  [1763930086.6079] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51805 uid=0 result="success"
Nov 23 20:34:46 compute-1 NetworkManager[49021]: <info>  [1763930086.7664] checkpoint[0x555c7ef66a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 23 20:34:46 compute-1 NetworkManager[49021]: <info>  [1763930086.7671] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51805 uid=0 result="success"
Nov 23 20:34:46 compute-1 ansible-async_wrapper.py[51803]: Module complete (51803)
Nov 23 20:34:47 compute-1 ansible-async_wrapper.py[51802]: Done in kid B.
Nov 23 20:34:49 compute-1 sudo[52266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axtqzzkcpytgttoynbnbnhxcqzzvxzev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930085.707511-846-229716478987567/AnsiballZ_async_status.py'
Nov 23 20:34:49 compute-1 sudo[52266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:34:49 compute-1 python3.9[52268]: ansible-ansible.legacy.async_status Invoked with jid=j595449502514.51799 mode=status _async_dir=/root/.ansible_async
Nov 23 20:34:49 compute-1 sudo[52266]: pam_unix(sudo:session): session closed for user root
Nov 23 20:34:50 compute-1 sudo[52366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjplnwmeiniykqdwzhatlbjqtosutzsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930085.707511-846-229716478987567/AnsiballZ_async_status.py'
Nov 23 20:34:50 compute-1 sudo[52366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:34:50 compute-1 python3.9[52368]: ansible-ansible.legacy.async_status Invoked with jid=j595449502514.51799 mode=cleanup _async_dir=/root/.ansible_async
Nov 23 20:34:50 compute-1 sudo[52366]: pam_unix(sudo:session): session closed for user root
Nov 23 20:34:51 compute-1 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 23 20:34:51 compute-1 sudo[52520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwkwwizwrygfxekzjoaocufqllvdfuzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930091.3771622-927-246569029256638/AnsiballZ_stat.py'
Nov 23 20:34:51 compute-1 sudo[52520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:34:51 compute-1 python3.9[52522]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:34:51 compute-1 sudo[52520]: pam_unix(sudo:session): session closed for user root
Nov 23 20:34:52 compute-1 sudo[52643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgwcybgyectpycqblqxavnaimvldclju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930091.3771622-927-246569029256638/AnsiballZ_copy.py'
Nov 23 20:34:52 compute-1 sudo[52643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:34:52 compute-1 python3.9[52645]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930091.3771622-927-246569029256638/.source.returncode _original_basename=._9la4h4r follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:34:52 compute-1 sudo[52643]: pam_unix(sudo:session): session closed for user root
Nov 23 20:34:53 compute-1 sudo[52796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqgfiuqzgxobiiwxdvhenoxnqeljnkkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930092.914343-975-134779940725131/AnsiballZ_stat.py'
Nov 23 20:34:53 compute-1 sudo[52796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:34:53 compute-1 python3.9[52798]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:34:53 compute-1 sudo[52796]: pam_unix(sudo:session): session closed for user root
Nov 23 20:34:53 compute-1 sudo[52921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhfycjhkfouqajatsbzwaszdjpzsgaqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930092.914343-975-134779940725131/AnsiballZ_copy.py'
Nov 23 20:34:53 compute-1 sudo[52921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:34:53 compute-1 python3.9[52923]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930092.914343-975-134779940725131/.source.cfg _original_basename=.e015nhze follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:34:53 compute-1 sudo[52921]: pam_unix(sudo:session): session closed for user root
Nov 23 20:34:54 compute-1 sshd-session[52893]: Invalid user support from 78.128.112.74 port 43568
Nov 23 20:34:54 compute-1 sshd-session[52893]: Connection closed by invalid user support 78.128.112.74 port 43568 [preauth]
Nov 23 20:34:54 compute-1 sudo[53073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwdholckaajesjbuxknsfhmcqmgztwxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930094.3316586-1020-135582655150319/AnsiballZ_systemd.py'
Nov 23 20:34:54 compute-1 sudo[53073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:34:55 compute-1 python3.9[53075]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 20:34:55 compute-1 systemd[1]: Reloading Network Manager...
Nov 23 20:34:55 compute-1 NetworkManager[49021]: <info>  [1763930095.3092] audit: op="reload" arg="0" pid=53079 uid=0 result="success"
Nov 23 20:34:55 compute-1 NetworkManager[49021]: <info>  [1763930095.3100] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 23 20:34:55 compute-1 systemd[1]: Reloaded Network Manager.
Nov 23 20:34:55 compute-1 sudo[53073]: pam_unix(sudo:session): session closed for user root
Nov 23 20:34:55 compute-1 sshd-session[45021]: Connection closed by 192.168.122.30 port 57620
Nov 23 20:34:55 compute-1 sshd-session[45018]: pam_unix(sshd:session): session closed for user zuul
Nov 23 20:34:55 compute-1 systemd-logind[793]: Session 11 logged out. Waiting for processes to exit.
Nov 23 20:34:55 compute-1 systemd[1]: session-11.scope: Deactivated successfully.
Nov 23 20:34:55 compute-1 systemd[1]: session-11.scope: Consumed 48.229s CPU time.
Nov 23 20:34:55 compute-1 systemd-logind[793]: Removed session 11.
Nov 23 20:35:01 compute-1 sshd-session[53110]: Accepted publickey for zuul from 192.168.122.30 port 48146 ssh2: ECDSA SHA256:7LF3rB/846W//CS4OIcVKlH1BXQGVCcZuH+b9rjPyTo
Nov 23 20:35:01 compute-1 systemd-logind[793]: New session 12 of user zuul.
Nov 23 20:35:01 compute-1 systemd[1]: Started Session 12 of User zuul.
Nov 23 20:35:01 compute-1 sshd-session[53110]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 20:35:02 compute-1 python3.9[53263]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:35:03 compute-1 python3.9[53418]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 20:35:05 compute-1 python3.9[53611]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:35:05 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 23 20:35:05 compute-1 sshd-session[53113]: Connection closed by 192.168.122.30 port 48146
Nov 23 20:35:05 compute-1 sshd-session[53110]: pam_unix(sshd:session): session closed for user zuul
Nov 23 20:35:05 compute-1 systemd[1]: session-12.scope: Deactivated successfully.
Nov 23 20:35:05 compute-1 systemd[1]: session-12.scope: Consumed 2.280s CPU time.
Nov 23 20:35:05 compute-1 systemd-logind[793]: Session 12 logged out. Waiting for processes to exit.
Nov 23 20:35:05 compute-1 systemd-logind[793]: Removed session 12.
Nov 23 20:35:05 compute-1 sshd-session[53637]: Invalid user apache from 34.91.0.68 port 56552
Nov 23 20:35:05 compute-1 sshd-session[53637]: Received disconnect from 34.91.0.68 port 56552:11: Bye Bye [preauth]
Nov 23 20:35:05 compute-1 sshd-session[53637]: Disconnected from invalid user apache 34.91.0.68 port 56552 [preauth]
Nov 23 20:35:11 compute-1 sshd-session[53642]: Accepted publickey for zuul from 192.168.122.30 port 60482 ssh2: ECDSA SHA256:7LF3rB/846W//CS4OIcVKlH1BXQGVCcZuH+b9rjPyTo
Nov 23 20:35:11 compute-1 systemd-logind[793]: New session 13 of user zuul.
Nov 23 20:35:11 compute-1 systemd[1]: Started Session 13 of User zuul.
Nov 23 20:35:11 compute-1 sshd-session[53642]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 20:35:12 compute-1 python3.9[53795]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:35:13 compute-1 python3.9[53950]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:35:14 compute-1 sudo[54104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgbbukgwxlaijynasdvkgandktsfyhvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930113.9835021-81-172926080751730/AnsiballZ_setup.py'
Nov 23 20:35:14 compute-1 sudo[54104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:35:14 compute-1 python3.9[54106]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 20:35:14 compute-1 sudo[54104]: pam_unix(sudo:session): session closed for user root
Nov 23 20:35:15 compute-1 sudo[54188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aafdyparohitybakuwznwdtrqqhsqiyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930113.9835021-81-172926080751730/AnsiballZ_dnf.py'
Nov 23 20:35:15 compute-1 sudo[54188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:35:15 compute-1 python3.9[54190]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 20:35:16 compute-1 sudo[54188]: pam_unix(sudo:session): session closed for user root
Nov 23 20:35:17 compute-1 sudo[54342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gewgswrphefpyqijbuzbtzbryaykplvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930117.1267884-117-260377922875122/AnsiballZ_setup.py'
Nov 23 20:35:17 compute-1 sudo[54342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:35:17 compute-1 python3.9[54344]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 20:35:18 compute-1 sudo[54342]: pam_unix(sudo:session): session closed for user root
Nov 23 20:35:18 compute-1 sudo[54537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgizvunbflgatrwcrgidlfecfguvdvad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930118.4138038-150-103264360402158/AnsiballZ_file.py'
Nov 23 20:35:18 compute-1 sudo[54537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:35:19 compute-1 python3.9[54539]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:35:19 compute-1 sudo[54537]: pam_unix(sudo:session): session closed for user root
Nov 23 20:35:19 compute-1 sudo[54689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onhefnwyslecihppznxvbbrjgtrjljci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930119.332477-174-51714680960388/AnsiballZ_command.py'
Nov 23 20:35:19 compute-1 sudo[54689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:35:19 compute-1 python3.9[54691]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:35:19 compute-1 systemd[1]: var-lib-containers-storage-overlay-compat2235841369-merged.mount: Deactivated successfully.
Nov 23 20:35:19 compute-1 podman[54692]: 2025-11-23 20:35:19.997418788 +0000 UTC m=+0.096153540 system refresh
Nov 23 20:35:20 compute-1 sudo[54689]: pam_unix(sudo:session): session closed for user root
Nov 23 20:35:20 compute-1 sudo[54852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmtbslvxfvcmlvadigccfeeqkxhbghly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930120.4056346-198-119984742398673/AnsiballZ_stat.py'
Nov 23 20:35:20 compute-1 sudo[54852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:35:20 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 20:35:21 compute-1 python3.9[54854]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:35:21 compute-1 sudo[54852]: pam_unix(sudo:session): session closed for user root
Nov 23 20:35:21 compute-1 sudo[54977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jteojlkofipfzdhghrgphlppdfefjvth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930120.4056346-198-119984742398673/AnsiballZ_copy.py'
Nov 23 20:35:21 compute-1 sudo[54977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:35:21 compute-1 python3.9[54979]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930120.4056346-198-119984742398673/.source.json follow=False _original_basename=podman_network_config.j2 checksum=ac1c67868e36c2960d9b69f46efe99c8dc349861 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:35:21 compute-1 sudo[54977]: pam_unix(sudo:session): session closed for user root
Nov 23 20:35:22 compute-1 sudo[55129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qohdqohrpurrdpkhnfpeepsodzlclkkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930122.0771065-243-154514973180175/AnsiballZ_stat.py'
Nov 23 20:35:22 compute-1 sudo[55129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:35:22 compute-1 sshd-session[54925]: Invalid user weblogic from 102.176.81.29 port 58600
Nov 23 20:35:22 compute-1 python3.9[55131]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:35:22 compute-1 sudo[55129]: pam_unix(sudo:session): session closed for user root
Nov 23 20:35:22 compute-1 sshd-session[54925]: Received disconnect from 102.176.81.29 port 58600:11: Bye Bye [preauth]
Nov 23 20:35:22 compute-1 sshd-session[54925]: Disconnected from invalid user weblogic 102.176.81.29 port 58600 [preauth]
Nov 23 20:35:22 compute-1 sudo[55252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsjyrxsoubfudcsrkpfgwcjlrplrosff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930122.0771065-243-154514973180175/AnsiballZ_copy.py'
Nov 23 20:35:22 compute-1 sudo[55252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:35:23 compute-1 python3.9[55254]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763930122.0771065-243-154514973180175/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:35:23 compute-1 sudo[55252]: pam_unix(sudo:session): session closed for user root
Nov 23 20:35:24 compute-1 sudo[55404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-repqythyowumcrpvrpcqznhfqzufpfxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930123.7582743-291-224445685189385/AnsiballZ_ini_file.py'
Nov 23 20:35:24 compute-1 sudo[55404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:35:24 compute-1 python3.9[55406]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:35:24 compute-1 sudo[55404]: pam_unix(sudo:session): session closed for user root
Nov 23 20:35:24 compute-1 sudo[55556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acjbvqeklvdjpvjennsfeiafltpxdrxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930124.5096257-291-254519511414337/AnsiballZ_ini_file.py'
Nov 23 20:35:24 compute-1 sudo[55556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:35:25 compute-1 python3.9[55558]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:35:25 compute-1 sudo[55556]: pam_unix(sudo:session): session closed for user root
Nov 23 20:35:25 compute-1 sudo[55708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hojuhyzudkemllqnazyrzqxepuhqltuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930125.2143714-291-111399606599896/AnsiballZ_ini_file.py'
Nov 23 20:35:25 compute-1 sudo[55708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:35:25 compute-1 python3.9[55710]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:35:25 compute-1 sudo[55708]: pam_unix(sudo:session): session closed for user root
Nov 23 20:35:26 compute-1 sudo[55862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfvkuccvkpiugsyslrhclxxlrqxzucml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930125.8870873-291-110741082808048/AnsiballZ_ini_file.py'
Nov 23 20:35:26 compute-1 sudo[55862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:35:26 compute-1 python3.9[55864]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:35:26 compute-1 sudo[55862]: pam_unix(sudo:session): session closed for user root
Nov 23 20:35:27 compute-1 sshd-session[55711]: Invalid user web from 43.225.142.116 port 42948
Nov 23 20:35:27 compute-1 sshd-session[55711]: Received disconnect from 43.225.142.116 port 42948:11: Bye Bye [preauth]
Nov 23 20:35:27 compute-1 sshd-session[55711]: Disconnected from invalid user web 43.225.142.116 port 42948 [preauth]
Nov 23 20:35:27 compute-1 sudo[56014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smnuhnuaunbodetwohrkiqqenevnusnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930127.2309527-384-77349351010198/AnsiballZ_dnf.py'
Nov 23 20:35:27 compute-1 sudo[56014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:35:27 compute-1 python3.9[56016]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 20:35:28 compute-1 sudo[56014]: pam_unix(sudo:session): session closed for user root
Nov 23 20:35:30 compute-1 sudo[56167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqcqsgwzumcziaxzuoorphizyhyehyuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930129.6921563-417-103348086531708/AnsiballZ_setup.py'
Nov 23 20:35:30 compute-1 sudo[56167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:35:30 compute-1 python3.9[56169]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:35:30 compute-1 sudo[56167]: pam_unix(sudo:session): session closed for user root
Nov 23 20:35:30 compute-1 sudo[56321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnqvofemgdtzgaqjnafthresntfenudz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930130.7520616-441-18108449908221/AnsiballZ_stat.py'
Nov 23 20:35:30 compute-1 sudo[56321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:35:31 compute-1 python3.9[56323]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:35:31 compute-1 sudo[56321]: pam_unix(sudo:session): session closed for user root
Nov 23 20:35:31 compute-1 sudo[56473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhloozdesdipvmzmfaafptkvbgecfqoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930131.6241899-468-273369215432379/AnsiballZ_stat.py'
Nov 23 20:35:31 compute-1 sudo[56473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:35:32 compute-1 python3.9[56475]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:35:32 compute-1 sudo[56473]: pam_unix(sudo:session): session closed for user root
Nov 23 20:35:32 compute-1 sudo[56625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkkrqwahdkqnrucvdojsovxjmqvsdlrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930132.5364547-498-260592611001602/AnsiballZ_command.py'
Nov 23 20:35:32 compute-1 sudo[56625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:35:32 compute-1 python3.9[56627]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:35:32 compute-1 sudo[56625]: pam_unix(sudo:session): session closed for user root
Nov 23 20:35:33 compute-1 sudo[56778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zudugtedmlknkzbofiahlsygxacwslaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930133.4217985-528-128557872037958/AnsiballZ_service_facts.py'
Nov 23 20:35:33 compute-1 sudo[56778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:35:34 compute-1 python3.9[56780]: ansible-service_facts Invoked
Nov 23 20:35:34 compute-1 network[56797]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 20:35:34 compute-1 network[56798]: 'network-scripts' will be removed from distribution in near future.
Nov 23 20:35:34 compute-1 network[56799]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 20:35:37 compute-1 sudo[56778]: pam_unix(sudo:session): session closed for user root
Nov 23 20:35:38 compute-1 sshd-session[56934]: Received disconnect from 118.145.189.160 port 55970:11: Bye Bye [preauth]
Nov 23 20:35:38 compute-1 sshd-session[56934]: Disconnected from authenticating user root 118.145.189.160 port 55970 [preauth]
Nov 23 20:35:39 compute-1 sudo[57084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejxpvwboaewvdxynhkginxfscwtspvgh ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1763930138.8256326-573-188072208756485/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1763930138.8256326-573-188072208756485/args'
Nov 23 20:35:39 compute-1 sudo[57084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:35:39 compute-1 sudo[57084]: pam_unix(sudo:session): session closed for user root
Nov 23 20:35:39 compute-1 sudo[57251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtdrtqkidyognrmawmbhrwrxnpmntlll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930139.706996-606-85391610439065/AnsiballZ_dnf.py'
Nov 23 20:35:39 compute-1 sudo[57251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:35:40 compute-1 python3.9[57253]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 20:35:41 compute-1 sudo[57251]: pam_unix(sudo:session): session closed for user root
Nov 23 20:35:42 compute-1 sudo[57404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgcdpallmyyosyxdanzagfdoezbaxgaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930142.1204705-645-270993330975470/AnsiballZ_package_facts.py'
Nov 23 20:35:42 compute-1 sudo[57404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:35:42 compute-1 python3.9[57406]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 23 20:35:43 compute-1 sudo[57404]: pam_unix(sudo:session): session closed for user root
Nov 23 20:35:44 compute-1 sudo[57556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scihimiwumoyoejlhlmqwxtlqnrgxelr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930143.909261-675-74030862214967/AnsiballZ_stat.py'
Nov 23 20:35:44 compute-1 sudo[57556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:35:44 compute-1 python3.9[57558]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:35:44 compute-1 sudo[57556]: pam_unix(sudo:session): session closed for user root
Nov 23 20:35:44 compute-1 sudo[57681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwxwphwfilifkiqerbpglswlemqwiamo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930143.909261-675-74030862214967/AnsiballZ_copy.py'
Nov 23 20:35:44 compute-1 sudo[57681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:35:45 compute-1 python3.9[57683]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930143.909261-675-74030862214967/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:35:45 compute-1 sudo[57681]: pam_unix(sudo:session): session closed for user root
Nov 23 20:35:45 compute-1 sudo[57835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoxumbgtyneshtlzbrvczgqsxcvvozdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930145.5873191-721-83237459869671/AnsiballZ_stat.py'
Nov 23 20:35:45 compute-1 sudo[57835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:35:46 compute-1 python3.9[57837]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:35:46 compute-1 sudo[57835]: pam_unix(sudo:session): session closed for user root
Nov 23 20:35:46 compute-1 sudo[57960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovrlkdfpwubhypslbdpnkamwshwamqkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930145.5873191-721-83237459869671/AnsiballZ_copy.py'
Nov 23 20:35:46 compute-1 sudo[57960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:35:46 compute-1 python3.9[57962]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930145.5873191-721-83237459869671/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:35:46 compute-1 sudo[57960]: pam_unix(sudo:session): session closed for user root
Nov 23 20:35:48 compute-1 sudo[58114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bseyuykvynfcuuxjgorghtpvgyzczstu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930147.9142797-784-126269848194610/AnsiballZ_lineinfile.py'
Nov 23 20:35:48 compute-1 sudo[58114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:35:48 compute-1 python3.9[58116]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:35:48 compute-1 sudo[58114]: pam_unix(sudo:session): session closed for user root
Nov 23 20:35:49 compute-1 sudo[58268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qybeankkldffmmfwwctddmltajxcurkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930149.6771634-829-148186993690627/AnsiballZ_setup.py'
Nov 23 20:35:49 compute-1 sudo[58268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:35:50 compute-1 python3.9[58270]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 20:35:50 compute-1 sudo[58268]: pam_unix(sudo:session): session closed for user root
Nov 23 20:35:50 compute-1 sudo[58352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdyluvpnxppjvexirkuogxxfouizwiry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930149.6771634-829-148186993690627/AnsiballZ_systemd.py'
Nov 23 20:35:50 compute-1 sudo[58352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:35:51 compute-1 python3.9[58354]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 20:35:51 compute-1 sudo[58352]: pam_unix(sudo:session): session closed for user root
Nov 23 20:35:53 compute-1 sudo[58506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djnzxdwoctuqvhaaawuquwcvbjgfkxcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930153.4556875-877-3006297492223/AnsiballZ_setup.py'
Nov 23 20:35:53 compute-1 sudo[58506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:35:53 compute-1 python3.9[58508]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 20:35:54 compute-1 sudo[58506]: pam_unix(sudo:session): session closed for user root
Nov 23 20:35:54 compute-1 sudo[58590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhswkvcuiwkshzdtturdtuwluxlgndib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930153.4556875-877-3006297492223/AnsiballZ_systemd.py'
Nov 23 20:35:54 compute-1 sudo[58590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:35:54 compute-1 python3.9[58592]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 20:35:54 compute-1 chronyd[808]: chronyd exiting
Nov 23 20:35:54 compute-1 systemd[1]: Stopping NTP client/server...
Nov 23 20:35:54 compute-1 systemd[1]: chronyd.service: Deactivated successfully.
Nov 23 20:35:54 compute-1 systemd[1]: Stopped NTP client/server.
Nov 23 20:35:54 compute-1 systemd[1]: Starting NTP client/server...
Nov 23 20:35:54 compute-1 chronyd[58600]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 23 20:35:54 compute-1 chronyd[58600]: Frequency -25.976 +/- 0.404 ppm read from /var/lib/chrony/drift
Nov 23 20:35:54 compute-1 chronyd[58600]: Loaded seccomp filter (level 2)
Nov 23 20:35:54 compute-1 systemd[1]: Started NTP client/server.
Nov 23 20:35:54 compute-1 sudo[58590]: pam_unix(sudo:session): session closed for user root
Nov 23 20:35:55 compute-1 sshd-session[53645]: Connection closed by 192.168.122.30 port 60482
Nov 23 20:35:55 compute-1 sshd-session[53642]: pam_unix(sshd:session): session closed for user zuul
Nov 23 20:35:55 compute-1 systemd[1]: session-13.scope: Deactivated successfully.
Nov 23 20:35:55 compute-1 systemd[1]: session-13.scope: Consumed 24.782s CPU time.
Nov 23 20:35:55 compute-1 systemd-logind[793]: Session 13 logged out. Waiting for processes to exit.
Nov 23 20:35:55 compute-1 systemd-logind[793]: Removed session 13.
Nov 23 20:36:00 compute-1 sshd-session[58626]: Accepted publickey for zuul from 192.168.122.30 port 37814 ssh2: ECDSA SHA256:7LF3rB/846W//CS4OIcVKlH1BXQGVCcZuH+b9rjPyTo
Nov 23 20:36:00 compute-1 systemd-logind[793]: New session 14 of user zuul.
Nov 23 20:36:01 compute-1 systemd[1]: Started Session 14 of User zuul.
Nov 23 20:36:01 compute-1 sshd-session[58626]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 20:36:01 compute-1 sudo[58779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jehzqkexwpnpmupkihpdsqzotfmgztbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930161.1134946-27-120003971069166/AnsiballZ_file.py'
Nov 23 20:36:01 compute-1 sudo[58779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:01 compute-1 python3.9[58781]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:36:01 compute-1 sudo[58779]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:02 compute-1 sudo[58931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkeytdxkausqxechbkzldkomdfrwvtur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930162.269778-63-249838362816358/AnsiballZ_stat.py'
Nov 23 20:36:02 compute-1 sudo[58931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:02 compute-1 python3.9[58933]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:36:02 compute-1 sudo[58931]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:03 compute-1 sudo[59054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvdtegvjxugtyywjkjtowupffcaclndi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930162.269778-63-249838362816358/AnsiballZ_copy.py'
Nov 23 20:36:03 compute-1 sudo[59054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:03 compute-1 python3.9[59056]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930162.269778-63-249838362816358/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:36:03 compute-1 sudo[59054]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:04 compute-1 sshd-session[58629]: Connection closed by 192.168.122.30 port 37814
Nov 23 20:36:04 compute-1 sshd-session[58626]: pam_unix(sshd:session): session closed for user zuul
Nov 23 20:36:04 compute-1 systemd[1]: session-14.scope: Deactivated successfully.
Nov 23 20:36:04 compute-1 systemd[1]: session-14.scope: Consumed 1.517s CPU time.
Nov 23 20:36:04 compute-1 systemd-logind[793]: Session 14 logged out. Waiting for processes to exit.
Nov 23 20:36:04 compute-1 systemd-logind[793]: Removed session 14.
Nov 23 20:36:07 compute-1 sshd-session[59081]: Received disconnect from 34.91.0.68 port 58532:11: Bye Bye [preauth]
Nov 23 20:36:07 compute-1 sshd-session[59081]: Disconnected from authenticating user root 34.91.0.68 port 58532 [preauth]
Nov 23 20:36:09 compute-1 sshd-session[59083]: Accepted publickey for zuul from 192.168.122.30 port 36742 ssh2: ECDSA SHA256:7LF3rB/846W//CS4OIcVKlH1BXQGVCcZuH+b9rjPyTo
Nov 23 20:36:09 compute-1 systemd-logind[793]: New session 15 of user zuul.
Nov 23 20:36:09 compute-1 systemd[1]: Started Session 15 of User zuul.
Nov 23 20:36:09 compute-1 sshd-session[59083]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 20:36:10 compute-1 python3.9[59236]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:36:11 compute-1 sudo[59390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwchqfmrrbaeqgigyprjjpezaxkqlyrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930170.9692023-60-109805004513095/AnsiballZ_file.py'
Nov 23 20:36:11 compute-1 sudo[59390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:11 compute-1 python3.9[59392]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:36:11 compute-1 sudo[59390]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:12 compute-1 sudo[59565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkclexzxvqucrvcjjxpqdghlvfndoozk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930171.9860466-84-133190198896731/AnsiballZ_stat.py'
Nov 23 20:36:12 compute-1 sudo[59565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:12 compute-1 python3.9[59567]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:36:12 compute-1 sudo[59565]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:13 compute-1 sudo[59688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgzmjazdzvxqzcpqyznckmeobohnbduv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930171.9860466-84-133190198896731/AnsiballZ_copy.py'
Nov 23 20:36:13 compute-1 sudo[59688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:13 compute-1 python3.9[59690]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1763930171.9860466-84-133190198896731/.source.json _original_basename=.ufs97lv7 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:36:13 compute-1 sudo[59688]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:14 compute-1 sudo[59840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwlwnlynkrhvgelgsjktsyhsidgmdgfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930174.028121-153-162254744154337/AnsiballZ_stat.py'
Nov 23 20:36:14 compute-1 sudo[59840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:14 compute-1 python3.9[59842]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:36:14 compute-1 sudo[59840]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:14 compute-1 sudo[59963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyxjnezbyedhwardvfqktuqyljovaduw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930174.028121-153-162254744154337/AnsiballZ_copy.py'
Nov 23 20:36:14 compute-1 sudo[59963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:14 compute-1 python3.9[59965]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930174.028121-153-162254744154337/.source _original_basename=.a0_52iof follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:36:15 compute-1 sudo[59963]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:15 compute-1 sudo[60115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtfudnuqxqpwzooekwmabsozmpewqfad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930175.565757-201-7995302340908/AnsiballZ_file.py'
Nov 23 20:36:15 compute-1 sudo[60115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:16 compute-1 python3.9[60117]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:36:16 compute-1 sudo[60115]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:16 compute-1 sudo[60267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-murrkeyvgmfnfbusmbodlxccapsftixl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930176.3679996-225-226355941565623/AnsiballZ_stat.py'
Nov 23 20:36:16 compute-1 sudo[60267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:16 compute-1 python3.9[60269]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:36:16 compute-1 sudo[60267]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:17 compute-1 sudo[60390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vraxbrtfhdkkjtshpsibfmmfoczyiczp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930176.3679996-225-226355941565623/AnsiballZ_copy.py'
Nov 23 20:36:17 compute-1 sudo[60390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:17 compute-1 python3.9[60392]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763930176.3679996-225-226355941565623/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:36:17 compute-1 sudo[60390]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:17 compute-1 sudo[60542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quzxlowhifapmjdsreirmexxwemgbpdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930177.4729695-225-263150874099607/AnsiballZ_stat.py'
Nov 23 20:36:17 compute-1 sudo[60542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:17 compute-1 python3.9[60544]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:36:17 compute-1 sudo[60542]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:18 compute-1 sudo[60665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blgiujpaldvcgbxzzrduedkcdymwrego ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930177.4729695-225-263150874099607/AnsiballZ_copy.py'
Nov 23 20:36:18 compute-1 sudo[60665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:18 compute-1 python3.9[60667]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763930177.4729695-225-263150874099607/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:36:18 compute-1 sudo[60665]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:19 compute-1 sudo[60817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smfasjcvxkbzklclemschzbmyrupshlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930179.269688-312-38388903448084/AnsiballZ_file.py'
Nov 23 20:36:19 compute-1 sudo[60817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:19 compute-1 python3.9[60819]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:36:19 compute-1 sudo[60817]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:20 compute-1 sudo[60969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwbzquyxhcgivwltrbmocdttntyrzgvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930180.1015997-336-137795132502481/AnsiballZ_stat.py'
Nov 23 20:36:20 compute-1 sudo[60969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:20 compute-1 python3.9[60971]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:36:20 compute-1 sudo[60969]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:20 compute-1 sudo[61092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kciyonqlpjjwjkwccykjxmqzkibusqll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930180.1015997-336-137795132502481/AnsiballZ_copy.py'
Nov 23 20:36:20 compute-1 sudo[61092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:21 compute-1 python3.9[61094]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930180.1015997-336-137795132502481/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:36:21 compute-1 sudo[61092]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:21 compute-1 sudo[61244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkmpowxyabrijftfurrtptlsoklsaagh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930181.6746585-381-236475053013401/AnsiballZ_stat.py'
Nov 23 20:36:21 compute-1 sudo[61244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:22 compute-1 python3.9[61246]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:36:22 compute-1 sudo[61244]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:22 compute-1 sudo[61367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnajlhswsretqzahrjboepqifyzauvgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930181.6746585-381-236475053013401/AnsiballZ_copy.py'
Nov 23 20:36:22 compute-1 sudo[61367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:22 compute-1 python3.9[61369]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930181.6746585-381-236475053013401/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:36:22 compute-1 sudo[61367]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:23 compute-1 sudo[61519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxgaluflahcbxgmzkhhijgishmtnmqnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930183.1912656-426-42469909480272/AnsiballZ_systemd.py'
Nov 23 20:36:23 compute-1 sudo[61519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:24 compute-1 python3.9[61521]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 20:36:24 compute-1 systemd[1]: Reloading.
Nov 23 20:36:24 compute-1 systemd-rc-local-generator[61549]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:36:24 compute-1 systemd-sysv-generator[61552]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:36:24 compute-1 systemd[1]: Reloading.
Nov 23 20:36:24 compute-1 systemd-sysv-generator[61589]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:36:24 compute-1 systemd-rc-local-generator[61585]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:36:24 compute-1 systemd[1]: Starting EDPM Container Shutdown...
Nov 23 20:36:24 compute-1 systemd[1]: Finished EDPM Container Shutdown.
Nov 23 20:36:24 compute-1 sudo[61519]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:25 compute-1 sudo[61746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eakbqrrkvbzvjdntzwqqgctwcpzffrls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930184.9895394-450-224800953782196/AnsiballZ_stat.py'
Nov 23 20:36:25 compute-1 sudo[61746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:25 compute-1 python3.9[61748]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:36:25 compute-1 sudo[61746]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:25 compute-1 sudo[61869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcgzajfvhpfxjqiudnpxdbivihqkyzse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930184.9895394-450-224800953782196/AnsiballZ_copy.py'
Nov 23 20:36:25 compute-1 sudo[61869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:25 compute-1 python3.9[61871]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930184.9895394-450-224800953782196/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:36:26 compute-1 sudo[61869]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:26 compute-1 sudo[62021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhqyyfialtkeoiwoqtpzlidmoaxbocwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930186.6092234-495-94860639556863/AnsiballZ_stat.py'
Nov 23 20:36:26 compute-1 sudo[62021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:27 compute-1 python3.9[62023]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:36:27 compute-1 sudo[62021]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:27 compute-1 sudo[62144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agpymnvwetdqpntossnimsurgnfokhzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930186.6092234-495-94860639556863/AnsiballZ_copy.py'
Nov 23 20:36:27 compute-1 sudo[62144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:27 compute-1 python3.9[62146]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930186.6092234-495-94860639556863/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:36:27 compute-1 sudo[62144]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:28 compute-1 sudo[62296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwihixavmggteycapwafzcqmcmhcgmxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930188.093558-540-202865164418707/AnsiballZ_systemd.py'
Nov 23 20:36:28 compute-1 sudo[62296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:28 compute-1 python3.9[62298]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 20:36:28 compute-1 systemd[1]: Reloading.
Nov 23 20:36:28 compute-1 systemd-rc-local-generator[62321]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:36:28 compute-1 systemd-sysv-generator[62328]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:36:28 compute-1 systemd[1]: Reloading.
Nov 23 20:36:28 compute-1 systemd-rc-local-generator[62366]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:36:28 compute-1 systemd-sysv-generator[62369]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:36:29 compute-1 sshd-session[62335]: Invalid user solv from 161.35.179.103 port 56090
Nov 23 20:36:29 compute-1 systemd[1]: Starting Create netns directory...
Nov 23 20:36:29 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 23 20:36:29 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 23 20:36:29 compute-1 systemd[1]: Finished Create netns directory.
Nov 23 20:36:29 compute-1 sudo[62296]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:29 compute-1 sshd-session[62335]: Connection closed by invalid user solv 161.35.179.103 port 56090 [preauth]
Nov 23 20:36:30 compute-1 sshd-session[62333]: Invalid user usuario from 43.225.142.116 port 39074
Nov 23 20:36:30 compute-1 python3.9[62527]: ansible-ansible.builtin.service_facts Invoked
Nov 23 20:36:30 compute-1 network[62544]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 20:36:30 compute-1 network[62545]: 'network-scripts' will be removed from distribution in near future.
Nov 23 20:36:30 compute-1 network[62546]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 20:36:30 compute-1 sshd-session[62333]: Received disconnect from 43.225.142.116 port 39074:11: Bye Bye [preauth]
Nov 23 20:36:30 compute-1 sshd-session[62333]: Disconnected from invalid user usuario 43.225.142.116 port 39074 [preauth]
Nov 23 20:36:35 compute-1 sudo[62806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbgudprnyoaegrcjtkcrscapvusimhth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930195.5375688-588-42129217554138/AnsiballZ_systemd.py'
Nov 23 20:36:35 compute-1 sudo[62806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:36 compute-1 python3.9[62808]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 20:36:36 compute-1 systemd[1]: Reloading.
Nov 23 20:36:36 compute-1 systemd-rc-local-generator[62838]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:36:36 compute-1 systemd-sysv-generator[62842]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:36:36 compute-1 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 23 20:36:36 compute-1 iptables.init[62850]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 23 20:36:36 compute-1 iptables.init[62850]: iptables: Flushing firewall rules: [  OK  ]
Nov 23 20:36:36 compute-1 systemd[1]: iptables.service: Deactivated successfully.
Nov 23 20:36:36 compute-1 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 23 20:36:36 compute-1 sudo[62806]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:37 compute-1 sudo[63044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxlwpuhgwldmcokuhkmgyguvpgvjftgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930196.8422897-588-203198971694912/AnsiballZ_systemd.py'
Nov 23 20:36:37 compute-1 sudo[63044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:38 compute-1 python3.9[63046]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 20:36:38 compute-1 sudo[63044]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:38 compute-1 sudo[63198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfsiunemgjecetywukgbzaxtnogsugyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930198.4705858-636-148586032182533/AnsiballZ_systemd.py'
Nov 23 20:36:38 compute-1 sudo[63198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:39 compute-1 python3.9[63200]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 20:36:39 compute-1 systemd[1]: Reloading.
Nov 23 20:36:39 compute-1 systemd-rc-local-generator[63230]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:36:39 compute-1 systemd-sysv-generator[63233]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:36:39 compute-1 systemd[1]: Starting Netfilter Tables...
Nov 23 20:36:39 compute-1 systemd[1]: Finished Netfilter Tables.
Nov 23 20:36:39 compute-1 sudo[63198]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:40 compute-1 sudo[63390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvomxboiwqfgmwieozxqaaugayelbqkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930199.858117-660-272204662104219/AnsiballZ_command.py'
Nov 23 20:36:40 compute-1 sudo[63390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:40 compute-1 python3.9[63392]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:36:40 compute-1 sudo[63390]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:41 compute-1 sudo[63543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hduavitbyfdxnhckyapaozyhwghnwqkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930201.156135-702-179355839405064/AnsiballZ_stat.py'
Nov 23 20:36:41 compute-1 sudo[63543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:41 compute-1 python3.9[63545]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:36:41 compute-1 sudo[63543]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:42 compute-1 sudo[63668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vieipevqgoynzatwxxmtkygornseebhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930201.156135-702-179355839405064/AnsiballZ_copy.py'
Nov 23 20:36:42 compute-1 sudo[63668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:42 compute-1 python3.9[63671]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930201.156135-702-179355839405064/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:36:42 compute-1 sudo[63668]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:42 compute-1 sudo[63823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svbdomntglsrnctgmxsszrpndlqkusyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930202.7492106-747-228733672760814/AnsiballZ_systemd.py'
Nov 23 20:36:42 compute-1 sudo[63823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:43 compute-1 sshd-session[63669]: Received disconnect from 102.176.81.29 port 32862:11: Bye Bye [preauth]
Nov 23 20:36:43 compute-1 sshd-session[63669]: Disconnected from authenticating user root 102.176.81.29 port 32862 [preauth]
Nov 23 20:36:43 compute-1 python3.9[63825]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 20:36:43 compute-1 systemd[1]: Reloading OpenSSH server daemon...
Nov 23 20:36:43 compute-1 sshd[1005]: Received SIGHUP; restarting.
Nov 23 20:36:43 compute-1 sshd[1005]: Server listening on 0.0.0.0 port 22.
Nov 23 20:36:43 compute-1 sshd[1005]: Server listening on :: port 22.
Nov 23 20:36:43 compute-1 systemd[1]: Reloaded OpenSSH server daemon.
Nov 23 20:36:43 compute-1 sudo[63823]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:43 compute-1 sudo[63979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naawjeogenncviodhvbdqvwczgizdaqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930203.6909914-771-51569382658904/AnsiballZ_file.py'
Nov 23 20:36:43 compute-1 sudo[63979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:44 compute-1 python3.9[63981]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:36:44 compute-1 sudo[63979]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:44 compute-1 sudo[64131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzrlnczulumwvqnhvhgujknwfzmjoqyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930204.4876158-795-156621429705602/AnsiballZ_stat.py'
Nov 23 20:36:44 compute-1 sudo[64131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:44 compute-1 python3.9[64133]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:36:44 compute-1 sudo[64131]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:45 compute-1 sudo[64254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acibqfrwkittygtnqbuattzwxvaqrioq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930204.4876158-795-156621429705602/AnsiballZ_copy.py'
Nov 23 20:36:45 compute-1 sudo[64254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:45 compute-1 python3.9[64256]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930204.4876158-795-156621429705602/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:36:45 compute-1 sudo[64254]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:46 compute-1 sudo[64406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcacwtokumjfbuelcmepcuqgkempfmgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930206.2761776-849-234463555685400/AnsiballZ_timezone.py'
Nov 23 20:36:46 compute-1 sudo[64406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:46 compute-1 python3.9[64408]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 23 20:36:46 compute-1 systemd[1]: Starting Time & Date Service...
Nov 23 20:36:46 compute-1 systemd[1]: Started Time & Date Service.
Nov 23 20:36:47 compute-1 sudo[64406]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:47 compute-1 sudo[64562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfdbtizcakoyzysmbaenmpujcfhnxjvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930207.4651053-876-247524375117626/AnsiballZ_file.py'
Nov 23 20:36:47 compute-1 sudo[64562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:47 compute-1 python3.9[64564]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:36:47 compute-1 sudo[64562]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:48 compute-1 sudo[64714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyossmxsunctwqroykklazwxpijscegu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930208.308962-901-189564753156378/AnsiballZ_stat.py'
Nov 23 20:36:48 compute-1 sudo[64714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:48 compute-1 python3.9[64716]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:36:48 compute-1 sudo[64714]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:49 compute-1 sudo[64837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnhqmdwrypvggptytjfjvlrmiccxuaio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930208.308962-901-189564753156378/AnsiballZ_copy.py'
Nov 23 20:36:49 compute-1 sudo[64837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:49 compute-1 python3.9[64839]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930208.308962-901-189564753156378/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:36:49 compute-1 sudo[64837]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:50 compute-1 sudo[64989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpfndamrdmcsafqnsjfkkropduuxzais ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930209.883207-946-206658909949110/AnsiballZ_stat.py'
Nov 23 20:36:50 compute-1 sudo[64989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:50 compute-1 python3.9[64991]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:36:50 compute-1 sudo[64989]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:50 compute-1 sudo[65112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lleuhjuodcjzrdorzulkhagucvxrtnug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930209.883207-946-206658909949110/AnsiballZ_copy.py'
Nov 23 20:36:50 compute-1 sudo[65112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:50 compute-1 python3.9[65114]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930209.883207-946-206658909949110/.source.yaml _original_basename=.tle0i_zh follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:36:50 compute-1 sudo[65112]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:51 compute-1 sudo[65264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjzsamjctsmvuegpfojhmbptncdjkisf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930211.3898401-990-162395279395576/AnsiballZ_stat.py'
Nov 23 20:36:51 compute-1 sudo[65264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:51 compute-1 python3.9[65266]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:36:51 compute-1 sudo[65264]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:52 compute-1 sudo[65387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrblwfqaiuloqgonhsmsqtiyjgxoqxqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930211.3898401-990-162395279395576/AnsiballZ_copy.py'
Nov 23 20:36:52 compute-1 sudo[65387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:52 compute-1 python3.9[65389]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930211.3898401-990-162395279395576/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:36:52 compute-1 sudo[65387]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:53 compute-1 sudo[65539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyetpdahrzzsfitndppqrmvmeycqkxor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930213.0309834-1035-235510948932144/AnsiballZ_command.py'
Nov 23 20:36:53 compute-1 sudo[65539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:53 compute-1 python3.9[65541]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:36:53 compute-1 sudo[65539]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:54 compute-1 sudo[65694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkzqigbvvzzpdzndvbrujekagmqcvxtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930213.8404713-1059-236648169738359/AnsiballZ_command.py'
Nov 23 20:36:54 compute-1 sudo[65694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:54 compute-1 python3.9[65696]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:36:54 compute-1 sudo[65694]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:54 compute-1 sshd-session[65542]: Invalid user min from 118.145.189.160 port 59072
Nov 23 20:36:54 compute-1 sshd-session[65542]: Received disconnect from 118.145.189.160 port 59072:11: Bye Bye [preauth]
Nov 23 20:36:54 compute-1 sshd-session[65542]: Disconnected from invalid user min 118.145.189.160 port 59072 [preauth]
Nov 23 20:36:55 compute-1 sudo[65847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyrxuypuhmifswrqbdnnunoyvqntdrdx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763930214.6455188-1083-141234158147947/AnsiballZ_edpm_nftables_from_files.py'
Nov 23 20:36:55 compute-1 sudo[65847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:55 compute-1 python3[65849]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 23 20:36:55 compute-1 sudo[65847]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:56 compute-1 sudo[65999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynohjczbpnqeucmoiunfcyfjxpzrjkzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930215.6492622-1107-104058532410110/AnsiballZ_stat.py'
Nov 23 20:36:56 compute-1 sudo[65999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:56 compute-1 python3.9[66001]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:36:56 compute-1 sudo[65999]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:56 compute-1 sudo[66122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgyexcbnuqedctidtnryclewwtmcyauh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930215.6492622-1107-104058532410110/AnsiballZ_copy.py'
Nov 23 20:36:56 compute-1 sudo[66122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:56 compute-1 python3.9[66124]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930215.6492622-1107-104058532410110/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:36:56 compute-1 sudo[66122]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:57 compute-1 sudo[66274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbsyapjxoodhzmyyzrjunwpswhlikuhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930217.1473029-1152-229716375945703/AnsiballZ_stat.py'
Nov 23 20:36:57 compute-1 sudo[66274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:57 compute-1 python3.9[66276]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:36:57 compute-1 sudo[66274]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:58 compute-1 sudo[66397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-likxpgmujsxvlpdcnwnhmkdxvpnxrjdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930217.1473029-1152-229716375945703/AnsiballZ_copy.py'
Nov 23 20:36:58 compute-1 sudo[66397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:58 compute-1 python3.9[66399]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930217.1473029-1152-229716375945703/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:36:58 compute-1 sudo[66397]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:59 compute-1 sudo[66549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjcnnisetedcrccpxehitabsqdendsub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930218.7269976-1197-74018833198880/AnsiballZ_stat.py'
Nov 23 20:36:59 compute-1 sudo[66549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:59 compute-1 python3.9[66551]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:36:59 compute-1 sudo[66549]: pam_unix(sudo:session): session closed for user root
Nov 23 20:36:59 compute-1 sudo[66672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upiqhvhkchvrqxlihrttnogwdhecokuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930218.7269976-1197-74018833198880/AnsiballZ_copy.py'
Nov 23 20:36:59 compute-1 sudo[66672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:36:59 compute-1 python3.9[66674]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930218.7269976-1197-74018833198880/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:36:59 compute-1 sudo[66672]: pam_unix(sudo:session): session closed for user root
Nov 23 20:37:00 compute-1 sudo[66824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbprtfipzoyfojqflnjpvkdjzfibyyvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930220.3910675-1242-86263106829677/AnsiballZ_stat.py'
Nov 23 20:37:00 compute-1 sudo[66824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:37:00 compute-1 python3.9[66826]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:37:00 compute-1 sudo[66824]: pam_unix(sudo:session): session closed for user root
Nov 23 20:37:01 compute-1 sudo[66947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iibeliyzvfcotskolbmxtfrrtoycxvle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930220.3910675-1242-86263106829677/AnsiballZ_copy.py'
Nov 23 20:37:01 compute-1 sudo[66947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:37:01 compute-1 python3.9[66949]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930220.3910675-1242-86263106829677/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:37:01 compute-1 sudo[66947]: pam_unix(sudo:session): session closed for user root
Nov 23 20:37:02 compute-1 sudo[67099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcfnsakcgyihbfqqjspmljzfkmhdhlkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930222.0214162-1287-184737454912534/AnsiballZ_stat.py'
Nov 23 20:37:02 compute-1 sudo[67099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:37:02 compute-1 python3.9[67101]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:37:02 compute-1 sudo[67099]: pam_unix(sudo:session): session closed for user root
Nov 23 20:37:02 compute-1 sudo[67222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmdllktdhdomqyjnwnarwwavaqixmzza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930222.0214162-1287-184737454912534/AnsiballZ_copy.py'
Nov 23 20:37:02 compute-1 sudo[67222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:37:03 compute-1 python3.9[67224]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930222.0214162-1287-184737454912534/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:37:03 compute-1 sudo[67222]: pam_unix(sudo:session): session closed for user root
Nov 23 20:37:04 compute-1 sudo[67374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpmjprxippobulqmvgamtoxngbjpqvvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930223.7938256-1332-207532591839860/AnsiballZ_file.py'
Nov 23 20:37:04 compute-1 sudo[67374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:37:04 compute-1 python3.9[67376]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:37:04 compute-1 sudo[67374]: pam_unix(sudo:session): session closed for user root
Nov 23 20:37:04 compute-1 sudo[67526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exxgnbamdqbpmvvgiyvnpukdnvvqaclj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930224.6468337-1357-119370889590841/AnsiballZ_command.py'
Nov 23 20:37:04 compute-1 sudo[67526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:37:05 compute-1 python3.9[67528]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:37:05 compute-1 sudo[67526]: pam_unix(sudo:session): session closed for user root
Nov 23 20:37:06 compute-1 sudo[67685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbbtprnschpotundfjubnsipjmshajpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930225.630851-1380-198002282868126/AnsiballZ_blockinfile.py'
Nov 23 20:37:06 compute-1 sudo[67685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:37:06 compute-1 python3.9[67687]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:37:06 compute-1 sudo[67685]: pam_unix(sudo:session): session closed for user root
Nov 23 20:37:07 compute-1 sudo[67838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kftjwckggrahqshsgbchrqpkasevimdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930227.0029097-1407-257589883078517/AnsiballZ_file.py'
Nov 23 20:37:07 compute-1 sudo[67838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:37:07 compute-1 python3.9[67840]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:37:07 compute-1 sudo[67838]: pam_unix(sudo:session): session closed for user root
Nov 23 20:37:07 compute-1 sudo[67990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpkiazagwtwonoeyjxgymxysvftdbbld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930227.6524599-1407-255058913943281/AnsiballZ_file.py'
Nov 23 20:37:07 compute-1 sudo[67990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:37:08 compute-1 python3.9[67992]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:37:08 compute-1 sudo[67990]: pam_unix(sudo:session): session closed for user root
Nov 23 20:37:09 compute-1 sudo[68144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghosiuzyyybvcyongvmjhlarvkqpnsli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930228.663547-1452-251493928995700/AnsiballZ_mount.py'
Nov 23 20:37:09 compute-1 sudo[68144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:37:09 compute-1 python3.9[68146]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 23 20:37:09 compute-1 sudo[68144]: pam_unix(sudo:session): session closed for user root
Nov 23 20:37:09 compute-1 sshd-session[68098]: Invalid user gerrit from 34.91.0.68 port 60502
Nov 23 20:37:09 compute-1 sshd-session[68098]: Received disconnect from 34.91.0.68 port 60502:11: Bye Bye [preauth]
Nov 23 20:37:09 compute-1 sshd-session[68098]: Disconnected from invalid user gerrit 34.91.0.68 port 60502 [preauth]
Nov 23 20:37:09 compute-1 sudo[68297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmeyuaskxhgjtwhieodwbwazjwvtvpvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930229.5240798-1452-240293057649458/AnsiballZ_mount.py'
Nov 23 20:37:09 compute-1 sudo[68297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:37:10 compute-1 python3.9[68299]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 23 20:37:10 compute-1 sudo[68297]: pam_unix(sudo:session): session closed for user root
Nov 23 20:37:10 compute-1 sshd-session[59086]: Connection closed by 192.168.122.30 port 36742
Nov 23 20:37:10 compute-1 sshd-session[59083]: pam_unix(sshd:session): session closed for user zuul
Nov 23 20:37:10 compute-1 systemd[1]: session-15.scope: Deactivated successfully.
Nov 23 20:37:10 compute-1 systemd[1]: session-15.scope: Consumed 32.976s CPU time.
Nov 23 20:37:10 compute-1 systemd-logind[793]: Session 15 logged out. Waiting for processes to exit.
Nov 23 20:37:10 compute-1 systemd-logind[793]: Removed session 15.
Nov 23 20:37:15 compute-1 sshd-session[68325]: Accepted publickey for zuul from 192.168.122.30 port 60650 ssh2: ECDSA SHA256:7LF3rB/846W//CS4OIcVKlH1BXQGVCcZuH+b9rjPyTo
Nov 23 20:37:15 compute-1 systemd-logind[793]: New session 16 of user zuul.
Nov 23 20:37:15 compute-1 systemd[1]: Started Session 16 of User zuul.
Nov 23 20:37:15 compute-1 sshd-session[68325]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 20:37:16 compute-1 sudo[68478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajetdffihuculmvmnryczuwdfeyaopmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930235.8432598-19-76159970517053/AnsiballZ_tempfile.py'
Nov 23 20:37:16 compute-1 sudo[68478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:37:16 compute-1 python3.9[68480]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 23 20:37:16 compute-1 sudo[68478]: pam_unix(sudo:session): session closed for user root
Nov 23 20:37:17 compute-1 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 23 20:37:17 compute-1 sudo[68632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olddolxzplzjwuwrwoivtxgvvyhwwjgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930236.9685738-55-277961710233745/AnsiballZ_stat.py'
Nov 23 20:37:17 compute-1 sudo[68632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:37:17 compute-1 python3.9[68634]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:37:17 compute-1 sudo[68632]: pam_unix(sudo:session): session closed for user root
Nov 23 20:37:18 compute-1 sudo[68784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiyeizahiybmrcdhlorccurdxzpzwvvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930237.9715402-85-4246892981876/AnsiballZ_setup.py'
Nov 23 20:37:18 compute-1 sudo[68784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:37:18 compute-1 python3.9[68786]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:37:18 compute-1 sudo[68784]: pam_unix(sudo:session): session closed for user root
Nov 23 20:37:19 compute-1 sudo[68936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-culvkrtmogihyutbqwetewausjzjegfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930239.2823837-110-165912955003798/AnsiballZ_blockinfile.py'
Nov 23 20:37:19 compute-1 sudo[68936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:37:19 compute-1 python3.9[68938]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZyfELJX7KkP8E4Yo+r9guKNy64TSJDfB+rBUAclCyKwGxjxhBTRAJJCOL6kSBIkbUub9LTNVh+s271jrKlK1rYs22c1DFe3ci9hBERauX4lIaBHw9kJBHURb9cB+VbonXf0hAdqGDLTXdqFnbed2oU0ngSuVesO/C9+SCSZFsfERuUe3/SXKbWfjehgYTi4GquXo6Ynq1HopME6mRR8qGsv6sgdkxpSaUiwtSBG5ONOSyzrev1t2hdDsRxvbZAZgV2ab6IMD9DTKaIXphHpumL6txas+nKViUfm+gW6p6EKNdHb/VLha7ghY3p4LE3OdXM4eytxszF0Fzs/0CXzafNxHjVjHzqxrJBi/PT22i6QD60NTimabHulw8IkZG6KsuNVq1rmlSSGQGjqAs7l6hNH8kF4uq1JwOl6mVgct5iE+ZzhfO5WRWShiE1LlCZpqdYE9VqmBrK5r70N0srW3h2mb4lTAwvC089Vert64D29M7riepyGCrGInpE4aK7Sk=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIFop+sR8mOkxOfCCMKg8Voa+6Ns0zHMRLKg+WdnL56v
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFQ0Rj0/OjRh0AQLkOX0VueFFf3xD5FqSzewSN/8R0Xh0Ybf7bkNUGszKaTkKSUBKR2e9V/GwA+BxEChWtzU3sY=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCrfRiqah4FSYlin2mt3PYchMDfWNjxPXqcCCW7iymA93OXZ1reX9dxsJRSssuxIkwaYv7OC+wrUmMOsDhULhy9uNDku8TnHodZVNms8z3UwQW2GPePqEdQ56rKSJ5DhpY0ly7PapOQ69jitmBGQjsu8go19hV3djXlFm1du9V1HMnfGqyr5REZ5ACjW2Rr0108gdYgrt/xh+1sl7cgixK0vUKaqN47/VJHXSTk20aXknt5lhurSKMbRD4cgP1pz0lBJ8LfEvFajLlXBk7MtsI8L94qtHH20hWUk8P2FmqsM4LoLIY4YkAT6kzDPkNdC5F3bpl67NzNXKLdStChVsjRVgrsR0JhU4YO8nYPSqn85KWQUMsuQhXfeMPb5a0n4vSmF0hQhaTctIIK5Yq+qK3S5Ee0tV+ZLMcrYiRfVJYjULh+8LazeUYBtZAVkOoenlHNpcxfVl2v8Fx37PYu6wY/1Ol7i+Fyg+DMculPNu0E00hYIfuSPW06sm98V0zJ7bs=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC0+oolG6Djq6MTp/HXh3SEc2a8aDRu5q8AnCiNHx/fN
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC1GCZqvti/wHDh2Oo7NSAFToY/dykBAXL2bgJmg9kqKO2qTzfIYtCRiGP/x9yaw+D3ymaftMgdHgFkzRtYcXz0=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCo3+sqhh74Wal6wWv19BRNHNnjTPYKculYCUftHSfYmbg5LryLTnsWAJdalXVBYQIJtq5uFrJRBG4C0R1XMU/MT4ZxuTtafwAzeTnKoCHbN/+mH31bndpvGKYRQ9AQHmamquyDQaSEjIYKFaK6eM7uVV/PaSZqasrB6awv3MeDH/GhtlyJwY7ble8M3UtG9jMWuPq/qX+TnKCZI3COyKBCe7F3aeaIewsho+T7qsRd8UNr55SHWJ1N6xYtA4FUayJ4cCZUeo4+SOJuQWb6A3HZm75y0LpdLDFH54DqyDqKVvDUfaKJJQV++3GT9kF9+jrwJDEK9VslSlEylLZ0zg1J0Z2zyMOwOAxBKEUXQNymC+00ybwJd4trP7KDy6+ZGOtHEThBgVO6vtuxQLWhseNa3otNXh7cHTf+Jfo7uo1wHbasd6aD1AVxvt4yKgOGy1ypt9Ps/COlbfHHFYZsI5gVLyJyK8aeipUjJUe6u6Qlf/F/inV1rwRBg8li7oeW7Ss=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFE96kcIFDgsK09K4ZL9HihPRGUmf4YDgXlXqtYy0M8r
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJoWf98fFp9mmY0S22K7n+FjL7cDYCGLm8eglORId7ZBFp9PG5e8P+ws6VWjBbceNazmskqBYurrlrsvB4Mu40E=
                                             create=True mode=0644 path=/tmp/ansible.pb_ans4v state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:37:19 compute-1 sudo[68936]: pam_unix(sudo:session): session closed for user root
Nov 23 20:37:20 compute-1 sudo[69088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-foamvalysqbxjrxdjegbpznfvelpbjwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930240.2556922-134-4183765886215/AnsiballZ_command.py'
Nov 23 20:37:20 compute-1 sudo[69088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:37:20 compute-1 python3.9[69090]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.pb_ans4v' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:37:20 compute-1 sudo[69088]: pam_unix(sudo:session): session closed for user root
Nov 23 20:37:21 compute-1 sudo[69242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwofzroghkeqxxxtborywtcyntpllrxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930241.2456083-158-171488634386382/AnsiballZ_file.py'
Nov 23 20:37:21 compute-1 sudo[69242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:37:21 compute-1 python3.9[69244]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.pb_ans4v state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:37:21 compute-1 sudo[69242]: pam_unix(sudo:session): session closed for user root
Nov 23 20:37:22 compute-1 sshd-session[68328]: Connection closed by 192.168.122.30 port 60650
Nov 23 20:37:22 compute-1 sshd-session[68325]: pam_unix(sshd:session): session closed for user zuul
Nov 23 20:37:22 compute-1 systemd[1]: session-16.scope: Deactivated successfully.
Nov 23 20:37:22 compute-1 systemd[1]: session-16.scope: Consumed 3.363s CPU time.
Nov 23 20:37:22 compute-1 systemd-logind[793]: Session 16 logged out. Waiting for processes to exit.
Nov 23 20:37:22 compute-1 systemd-logind[793]: Removed session 16.
Nov 23 20:37:27 compute-1 sshd-session[69270]: Accepted publickey for zuul from 192.168.122.30 port 56198 ssh2: ECDSA SHA256:7LF3rB/846W//CS4OIcVKlH1BXQGVCcZuH+b9rjPyTo
Nov 23 20:37:27 compute-1 systemd-logind[793]: New session 17 of user zuul.
Nov 23 20:37:27 compute-1 systemd[1]: Started Session 17 of User zuul.
Nov 23 20:37:27 compute-1 sshd-session[69270]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 20:37:28 compute-1 python3.9[69423]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:37:30 compute-1 sudo[69577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekaquykphntbpzqgyzxhgkgvaatllieg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930249.456124-57-9043022584504/AnsiballZ_systemd.py'
Nov 23 20:37:30 compute-1 sudo[69577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:37:30 compute-1 python3.9[69579]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 23 20:37:30 compute-1 sudo[69577]: pam_unix(sudo:session): session closed for user root
Nov 23 20:37:30 compute-1 sudo[69731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkganxaxmlmgypqhpqvwcxsfnwhbyszp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930250.7062395-81-47104034490601/AnsiballZ_systemd.py'
Nov 23 20:37:30 compute-1 sudo[69731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:37:31 compute-1 python3.9[69733]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 20:37:31 compute-1 sudo[69731]: pam_unix(sudo:session): session closed for user root
Nov 23 20:37:32 compute-1 sudo[69886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwcyyepixiepqxjzzgegkfylfqizouwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930251.7417583-108-121004087825169/AnsiballZ_command.py'
Nov 23 20:37:32 compute-1 sudo[69886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:37:32 compute-1 python3.9[69888]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:37:32 compute-1 sudo[69886]: pam_unix(sudo:session): session closed for user root
Nov 23 20:37:32 compute-1 sshd-session[69734]: Invalid user local from 43.225.142.116 port 35208
Nov 23 20:37:32 compute-1 sshd-session[69734]: Received disconnect from 43.225.142.116 port 35208:11: Bye Bye [preauth]
Nov 23 20:37:32 compute-1 sshd-session[69734]: Disconnected from invalid user local 43.225.142.116 port 35208 [preauth]
Nov 23 20:37:33 compute-1 sudo[70039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcokgufgtekghuouljdzenssidflhblw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930252.666192-132-53756120696088/AnsiballZ_stat.py'
Nov 23 20:37:33 compute-1 sudo[70039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:37:33 compute-1 python3.9[70041]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:37:33 compute-1 sudo[70039]: pam_unix(sudo:session): session closed for user root
Nov 23 20:37:33 compute-1 sudo[70193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpikaptwoudvubpetsnxufracppsimyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930253.595613-156-134017152497192/AnsiballZ_command.py'
Nov 23 20:37:33 compute-1 sudo[70193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:37:34 compute-1 python3.9[70195]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:37:34 compute-1 sudo[70193]: pam_unix(sudo:session): session closed for user root
Nov 23 20:37:34 compute-1 sudo[70348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eudnxalpndorfppoiwvduedghcyeouhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930254.5440826-180-123336748223720/AnsiballZ_file.py'
Nov 23 20:37:34 compute-1 sudo[70348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:37:35 compute-1 python3.9[70350]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:37:35 compute-1 sudo[70348]: pam_unix(sudo:session): session closed for user root
Nov 23 20:37:35 compute-1 sshd-session[69273]: Connection closed by 192.168.122.30 port 56198
Nov 23 20:37:35 compute-1 sshd-session[69270]: pam_unix(sshd:session): session closed for user zuul
Nov 23 20:37:35 compute-1 systemd-logind[793]: Session 17 logged out. Waiting for processes to exit.
Nov 23 20:37:35 compute-1 systemd[1]: session-17.scope: Deactivated successfully.
Nov 23 20:37:35 compute-1 systemd[1]: session-17.scope: Consumed 4.186s CPU time.
Nov 23 20:37:35 compute-1 systemd-logind[793]: Removed session 17.
Nov 23 20:37:41 compute-1 sshd-session[70375]: Accepted publickey for zuul from 192.168.122.30 port 34102 ssh2: ECDSA SHA256:7LF3rB/846W//CS4OIcVKlH1BXQGVCcZuH+b9rjPyTo
Nov 23 20:37:41 compute-1 systemd-logind[793]: New session 18 of user zuul.
Nov 23 20:37:41 compute-1 systemd[1]: Started Session 18 of User zuul.
Nov 23 20:37:41 compute-1 sshd-session[70375]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 20:37:42 compute-1 python3.9[70528]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:37:43 compute-1 sudo[70682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuvbnksbsyeiriixkhffasmbpkkdvwoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930262.801241-63-84646532582352/AnsiballZ_setup.py'
Nov 23 20:37:43 compute-1 sudo[70682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:37:43 compute-1 python3.9[70684]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 20:37:43 compute-1 sudo[70682]: pam_unix(sudo:session): session closed for user root
Nov 23 20:37:44 compute-1 sudo[70768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbufsxizeffvsnqpjsfohdkwjmdsihml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930262.801241-63-84646532582352/AnsiballZ_dnf.py'
Nov 23 20:37:44 compute-1 sudo[70768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:37:44 compute-1 python3.9[70770]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 23 20:37:44 compute-1 sshd-session[70693]: Invalid user ethereum from 92.118.39.92 port 50130
Nov 23 20:37:44 compute-1 sshd-session[70693]: Connection closed by invalid user ethereum 92.118.39.92 port 50130 [preauth]
Nov 23 20:37:45 compute-1 sudo[70768]: pam_unix(sudo:session): session closed for user root
Nov 23 20:37:46 compute-1 python3.9[70921]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:37:47 compute-1 python3.9[71072]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 23 20:37:48 compute-1 python3.9[71222]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:37:48 compute-1 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 20:37:48 compute-1 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 20:37:49 compute-1 python3.9[71373]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:37:49 compute-1 sshd-session[70378]: Connection closed by 192.168.122.30 port 34102
Nov 23 20:37:49 compute-1 sshd-session[70375]: pam_unix(sshd:session): session closed for user zuul
Nov 23 20:37:49 compute-1 systemd[1]: session-18.scope: Deactivated successfully.
Nov 23 20:37:49 compute-1 systemd[1]: session-18.scope: Consumed 6.009s CPU time.
Nov 23 20:37:49 compute-1 systemd-logind[793]: Session 18 logged out. Waiting for processes to exit.
Nov 23 20:37:49 compute-1 systemd-logind[793]: Removed session 18.
Nov 23 20:37:58 compute-1 sshd-session[71398]: Accepted publickey for zuul from 38.102.83.13 port 58128 ssh2: RSA SHA256:vJMLYQFuuPNw0oBlCMsukcLw8e8jDo/ucmylbroLweU
Nov 23 20:37:58 compute-1 systemd-logind[793]: New session 19 of user zuul.
Nov 23 20:37:58 compute-1 systemd[1]: Started Session 19 of User zuul.
Nov 23 20:37:58 compute-1 sshd-session[71398]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 20:37:58 compute-1 sudo[71474]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvcvsuanwdefugxpusbskihvyyypaaqr ; /usr/bin/python3'
Nov 23 20:37:58 compute-1 sudo[71474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:37:58 compute-1 useradd[71478]: new group: name=ceph-admin, GID=42478
Nov 23 20:37:58 compute-1 useradd[71478]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Nov 23 20:37:58 compute-1 sudo[71474]: pam_unix(sudo:session): session closed for user root
Nov 23 20:37:59 compute-1 sudo[71560]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jehtniwjxctjlowaawzilqskjzyduavf ; /usr/bin/python3'
Nov 23 20:37:59 compute-1 sudo[71560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:37:59 compute-1 sudo[71560]: pam_unix(sudo:session): session closed for user root
Nov 23 20:37:59 compute-1 sudo[71633]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qurggaliqvloxdydfcnfvcbulgwnmrit ; /usr/bin/python3'
Nov 23 20:37:59 compute-1 sudo[71633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:37:59 compute-1 sudo[71633]: pam_unix(sudo:session): session closed for user root
Nov 23 20:38:00 compute-1 sudo[71683]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvgzkyuerwkigqtrlisitlvwmoyhobwt ; /usr/bin/python3'
Nov 23 20:38:00 compute-1 sudo[71683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:38:00 compute-1 sudo[71683]: pam_unix(sudo:session): session closed for user root
Nov 23 20:38:00 compute-1 sudo[71709]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zddndsfktizorevrzpoutpchmiqmmxfc ; /usr/bin/python3'
Nov 23 20:38:00 compute-1 sudo[71709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:38:00 compute-1 sudo[71709]: pam_unix(sudo:session): session closed for user root
Nov 23 20:38:01 compute-1 sudo[71737]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-perbdkacmiqqlvuhpiflrmpyhtgwlbab ; /usr/bin/python3'
Nov 23 20:38:01 compute-1 sudo[71737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:38:01 compute-1 sudo[71737]: pam_unix(sudo:session): session closed for user root
Nov 23 20:38:01 compute-1 sudo[71763]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvfoitcxupgkyfpacgkdgzzwmalzibmz ; /usr/bin/python3'
Nov 23 20:38:01 compute-1 sudo[71763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:38:01 compute-1 sshd-session[71712]: Invalid user teamspeak from 102.176.81.29 port 35410
Nov 23 20:38:01 compute-1 sudo[71763]: pam_unix(sudo:session): session closed for user root
Nov 23 20:38:02 compute-1 sshd-session[71712]: Received disconnect from 102.176.81.29 port 35410:11: Bye Bye [preauth]
Nov 23 20:38:02 compute-1 sshd-session[71712]: Disconnected from invalid user teamspeak 102.176.81.29 port 35410 [preauth]
Nov 23 20:38:02 compute-1 sudo[71841]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-waajuiqjztbqcvfuawbyxypquxyltgin ; /usr/bin/python3'
Nov 23 20:38:02 compute-1 sudo[71841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:38:02 compute-1 sudo[71841]: pam_unix(sudo:session): session closed for user root
Nov 23 20:38:02 compute-1 sudo[71914]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiobjhgwinkmwpsaxmewhyudvrlyjoub ; /usr/bin/python3'
Nov 23 20:38:02 compute-1 sudo[71914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:38:02 compute-1 sudo[71914]: pam_unix(sudo:session): session closed for user root
Nov 23 20:38:03 compute-1 sudo[72016]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxxaqoivqpweftvmzktmynusficytmwu ; /usr/bin/python3'
Nov 23 20:38:03 compute-1 sudo[72016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:38:03 compute-1 sudo[72016]: pam_unix(sudo:session): session closed for user root
Nov 23 20:38:03 compute-1 sudo[72089]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxkgvxtlfbdrtuadfunzqpvhnpefdcvq ; /usr/bin/python3'
Nov 23 20:38:03 compute-1 sudo[72089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:38:03 compute-1 sudo[72089]: pam_unix(sudo:session): session closed for user root
Nov 23 20:38:04 compute-1 chronyd[58600]: Selected source 174.138.193.90 (pool.ntp.org)
Nov 23 20:38:04 compute-1 sudo[72139]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwhvqfqtlnmhycjxlkxuyuodnzjvexim ; /usr/bin/python3'
Nov 23 20:38:04 compute-1 sudo[72139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:38:04 compute-1 python3[72141]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:38:05 compute-1 sudo[72139]: pam_unix(sudo:session): session closed for user root
Nov 23 20:38:06 compute-1 sudo[72234]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wztjppbwxoosfwrvrbqjksqnlutgzcar ; /usr/bin/python3'
Nov 23 20:38:06 compute-1 sudo[72234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:38:06 compute-1 python3[72236]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 23 20:38:08 compute-1 sudo[72234]: pam_unix(sudo:session): session closed for user root
Nov 23 20:38:08 compute-1 sudo[72261]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsyojrumivsozfdvzvtuyxhscitqpwns ; /usr/bin/python3'
Nov 23 20:38:08 compute-1 sudo[72261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:38:08 compute-1 python3[72263]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 20:38:08 compute-1 sudo[72261]: pam_unix(sudo:session): session closed for user root
Nov 23 20:38:08 compute-1 sudo[72287]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rseuxsqhptewsxyaxshjzvjzabiahxcn ; /usr/bin/python3'
Nov 23 20:38:08 compute-1 sudo[72287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:38:08 compute-1 python3[72289]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:38:08 compute-1 kernel: loop: module loaded
Nov 23 20:38:08 compute-1 kernel: loop3: detected capacity change from 0 to 41943040
Nov 23 20:38:08 compute-1 sudo[72287]: pam_unix(sudo:session): session closed for user root
Nov 23 20:38:09 compute-1 sudo[72322]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ernkzndsgjwvytpqhckggwmvflypxyrw ; /usr/bin/python3'
Nov 23 20:38:09 compute-1 sudo[72322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:38:09 compute-1 python3[72324]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:38:09 compute-1 lvm[72327]: PV /dev/loop3 not used.
Nov 23 20:38:09 compute-1 lvm[72329]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 20:38:09 compute-1 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 23 20:38:09 compute-1 lvm[72332]:   1 logical volume(s) in volume group "ceph_vg0" now active
Nov 23 20:38:09 compute-1 lvm[72339]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 20:38:09 compute-1 lvm[72339]: VG ceph_vg0 finished
Nov 23 20:38:09 compute-1 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Nov 23 20:38:09 compute-1 sudo[72322]: pam_unix(sudo:session): session closed for user root
Nov 23 20:38:10 compute-1 sudo[72415]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hylifnfmvphtqznfgrspyqooyazmkueb ; /usr/bin/python3'
Nov 23 20:38:10 compute-1 sudo[72415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:38:10 compute-1 python3[72417]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 20:38:10 compute-1 sudo[72415]: pam_unix(sudo:session): session closed for user root
Nov 23 20:38:10 compute-1 sudo[72488]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lepbtqegztbskadxzpzshcvzktwqpuyn ; /usr/bin/python3'
Nov 23 20:38:10 compute-1 sudo[72488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:38:10 compute-1 python3[72490]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763930289.8710515-36961-72011168147632/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:38:10 compute-1 sudo[72488]: pam_unix(sudo:session): session closed for user root
Nov 23 20:38:11 compute-1 sudo[72542]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-infxfitjzxrxzhlekeebysoogbkzbplv ; /usr/bin/python3'
Nov 23 20:38:11 compute-1 sudo[72542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:38:11 compute-1 python3[72544]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 20:38:11 compute-1 systemd[1]: Reloading.
Nov 23 20:38:11 compute-1 sshd-session[72517]: Received disconnect from 34.91.0.68 port 34246:11: Bye Bye [preauth]
Nov 23 20:38:11 compute-1 sshd-session[72517]: Disconnected from authenticating user root 34.91.0.68 port 34246 [preauth]
Nov 23 20:38:11 compute-1 systemd-rc-local-generator[72572]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:38:11 compute-1 systemd-sysv-generator[72576]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:38:11 compute-1 systemd[1]: Starting Ceph OSD losetup...
Nov 23 20:38:11 compute-1 bash[72584]: /dev/loop3: [64513]:4328000 (/var/lib/ceph-osd-0.img)
Nov 23 20:38:11 compute-1 systemd[1]: Finished Ceph OSD losetup.
Nov 23 20:38:11 compute-1 lvm[72585]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 20:38:11 compute-1 lvm[72585]: VG ceph_vg0 finished
Nov 23 20:38:11 compute-1 sudo[72542]: pam_unix(sudo:session): session closed for user root
Nov 23 20:38:12 compute-1 sshd-session[72491]: Received disconnect from 118.145.189.160 port 37358:11: Bye Bye [preauth]
Nov 23 20:38:12 compute-1 sshd-session[72491]: Disconnected from authenticating user root 118.145.189.160 port 37358 [preauth]
Nov 23 20:38:14 compute-1 python3[72609]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:38:15 compute-1 sshd-session[72653]: Connection closed by 161.35.133.66 port 41338
Nov 23 20:38:36 compute-1 sshd-session[72654]: Invalid user master from 43.225.142.116 port 59562
Nov 23 20:38:36 compute-1 sshd-session[72654]: Received disconnect from 43.225.142.116 port 59562:11: Bye Bye [preauth]
Nov 23 20:38:36 compute-1 sshd-session[72654]: Disconnected from invalid user master 43.225.142.116 port 59562 [preauth]
Nov 23 20:38:41 compute-1 sshd-session[72657]: Invalid user solv from 161.35.179.103 port 39352
Nov 23 20:38:41 compute-1 sshd-session[72657]: Connection closed by invalid user solv 161.35.179.103 port 39352 [preauth]
Nov 23 20:39:16 compute-1 sshd-session[72659]: Invalid user user from 34.91.0.68 port 36222
Nov 23 20:39:16 compute-1 sshd-session[72659]: Received disconnect from 34.91.0.68 port 36222:11: Bye Bye [preauth]
Nov 23 20:39:16 compute-1 sshd-session[72659]: Disconnected from invalid user user 34.91.0.68 port 36222 [preauth]
Nov 23 20:39:21 compute-1 sshd-session[72661]: Invalid user user1 from 102.176.81.29 port 37900
Nov 23 20:39:22 compute-1 sshd-session[72661]: Received disconnect from 102.176.81.29 port 37900:11: Bye Bye [preauth]
Nov 23 20:39:22 compute-1 sshd-session[72661]: Disconnected from invalid user user1 102.176.81.29 port 37900 [preauth]
Nov 23 20:39:30 compute-1 sshd-session[72663]: Invalid user server from 118.145.189.160 port 57656
Nov 23 20:39:30 compute-1 sshd-session[72663]: Received disconnect from 118.145.189.160 port 57656:11: Bye Bye [preauth]
Nov 23 20:39:30 compute-1 sshd-session[72663]: Disconnected from invalid user server 118.145.189.160 port 57656 [preauth]
Nov 23 20:39:41 compute-1 sshd-session[72665]: Received disconnect from 43.225.142.116 port 55692:11: Bye Bye [preauth]
Nov 23 20:39:41 compute-1 sshd-session[72665]: Disconnected from authenticating user root 43.225.142.116 port 55692 [preauth]
Nov 23 20:39:42 compute-1 sshd-session[72667]: Accepted publickey for ceph-admin from 192.168.122.100 port 58242 ssh2: RSA SHA256:ArvGVmp8+2uP4nDr4YVQ5KKtNyaQTjQGpGKaK12sPrI
Nov 23 20:39:42 compute-1 systemd[1]: Created slice User Slice of UID 42477.
Nov 23 20:39:42 compute-1 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 23 20:39:42 compute-1 systemd-logind[793]: New session 20 of user ceph-admin.
Nov 23 20:39:42 compute-1 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 23 20:39:43 compute-1 systemd[1]: Starting User Manager for UID 42477...
Nov 23 20:39:43 compute-1 systemd[72671]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 23 20:39:43 compute-1 systemd[72671]: Queued start job for default target Main User Target.
Nov 23 20:39:43 compute-1 systemd[72671]: Created slice User Application Slice.
Nov 23 20:39:43 compute-1 systemd[72671]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 23 20:39:43 compute-1 systemd[72671]: Started Daily Cleanup of User's Temporary Directories.
Nov 23 20:39:43 compute-1 systemd[72671]: Reached target Paths.
Nov 23 20:39:43 compute-1 systemd[72671]: Reached target Timers.
Nov 23 20:39:43 compute-1 systemd[72671]: Starting D-Bus User Message Bus Socket...
Nov 23 20:39:43 compute-1 systemd[72671]: Starting Create User's Volatile Files and Directories...
Nov 23 20:39:43 compute-1 sshd-session[72684]: Accepted publickey for ceph-admin from 192.168.122.100 port 58250 ssh2: RSA SHA256:ArvGVmp8+2uP4nDr4YVQ5KKtNyaQTjQGpGKaK12sPrI
Nov 23 20:39:43 compute-1 systemd[72671]: Finished Create User's Volatile Files and Directories.
Nov 23 20:39:43 compute-1 systemd[72671]: Listening on D-Bus User Message Bus Socket.
Nov 23 20:39:43 compute-1 systemd[72671]: Reached target Sockets.
Nov 23 20:39:43 compute-1 systemd[72671]: Reached target Basic System.
Nov 23 20:39:43 compute-1 systemd[72671]: Reached target Main User Target.
Nov 23 20:39:43 compute-1 systemd[72671]: Startup finished in 121ms.
Nov 23 20:39:43 compute-1 systemd[1]: Started User Manager for UID 42477.
Nov 23 20:39:43 compute-1 systemd[1]: Started Session 20 of User ceph-admin.
Nov 23 20:39:43 compute-1 sshd-session[72667]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 23 20:39:43 compute-1 systemd-logind[793]: New session 22 of user ceph-admin.
Nov 23 20:39:43 compute-1 systemd[1]: Started Session 22 of User ceph-admin.
Nov 23 20:39:43 compute-1 sshd-session[72684]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 23 20:39:43 compute-1 sudo[72691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:39:43 compute-1 sudo[72691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:39:43 compute-1 sudo[72691]: pam_unix(sudo:session): session closed for user root
Nov 23 20:39:43 compute-1 sshd-session[72716]: Accepted publickey for ceph-admin from 192.168.122.100 port 58254 ssh2: RSA SHA256:ArvGVmp8+2uP4nDr4YVQ5KKtNyaQTjQGpGKaK12sPrI
Nov 23 20:39:43 compute-1 systemd-logind[793]: New session 23 of user ceph-admin.
Nov 23 20:39:43 compute-1 systemd[1]: Started Session 23 of User ceph-admin.
Nov 23 20:39:43 compute-1 sshd-session[72716]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 23 20:39:43 compute-1 sudo[72720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-1
Nov 23 20:39:43 compute-1 sudo[72720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:39:43 compute-1 sudo[72720]: pam_unix(sudo:session): session closed for user root
Nov 23 20:39:43 compute-1 sshd-session[72745]: Accepted publickey for ceph-admin from 192.168.122.100 port 58262 ssh2: RSA SHA256:ArvGVmp8+2uP4nDr4YVQ5KKtNyaQTjQGpGKaK12sPrI
Nov 23 20:39:43 compute-1 systemd-logind[793]: New session 24 of user ceph-admin.
Nov 23 20:39:43 compute-1 systemd[1]: Started Session 24 of User ceph-admin.
Nov 23 20:39:43 compute-1 sshd-session[72745]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 23 20:39:43 compute-1 sudo[72749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Nov 23 20:39:43 compute-1 sudo[72749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:39:43 compute-1 sudo[72749]: pam_unix(sudo:session): session closed for user root
Nov 23 20:39:44 compute-1 sshd-session[72774]: Accepted publickey for ceph-admin from 192.168.122.100 port 58270 ssh2: RSA SHA256:ArvGVmp8+2uP4nDr4YVQ5KKtNyaQTjQGpGKaK12sPrI
Nov 23 20:39:44 compute-1 systemd-logind[793]: New session 25 of user ceph-admin.
Nov 23 20:39:44 compute-1 systemd[1]: Started Session 25 of User ceph-admin.
Nov 23 20:39:44 compute-1 sshd-session[72774]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 23 20:39:44 compute-1 sudo[72778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:39:44 compute-1 sudo[72778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:39:44 compute-1 sudo[72778]: pam_unix(sudo:session): session closed for user root
Nov 23 20:39:44 compute-1 sshd-session[72803]: Accepted publickey for ceph-admin from 192.168.122.100 port 58274 ssh2: RSA SHA256:ArvGVmp8+2uP4nDr4YVQ5KKtNyaQTjQGpGKaK12sPrI
Nov 23 20:39:44 compute-1 systemd-logind[793]: New session 26 of user ceph-admin.
Nov 23 20:39:44 compute-1 systemd[1]: Started Session 26 of User ceph-admin.
Nov 23 20:39:44 compute-1 sshd-session[72803]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 23 20:39:44 compute-1 sudo[72807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:39:44 compute-1 sudo[72807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:39:44 compute-1 sudo[72807]: pam_unix(sudo:session): session closed for user root
Nov 23 20:39:44 compute-1 sshd-session[72832]: Accepted publickey for ceph-admin from 192.168.122.100 port 56514 ssh2: RSA SHA256:ArvGVmp8+2uP4nDr4YVQ5KKtNyaQTjQGpGKaK12sPrI
Nov 23 20:39:44 compute-1 systemd-logind[793]: New session 27 of user ceph-admin.
Nov 23 20:39:44 compute-1 systemd[1]: Started Session 27 of User ceph-admin.
Nov 23 20:39:44 compute-1 sshd-session[72832]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 23 20:39:44 compute-1 sudo[72836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Nov 23 20:39:44 compute-1 sudo[72836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:39:44 compute-1 sudo[72836]: pam_unix(sudo:session): session closed for user root
Nov 23 20:39:45 compute-1 sshd-session[72861]: Accepted publickey for ceph-admin from 192.168.122.100 port 56520 ssh2: RSA SHA256:ArvGVmp8+2uP4nDr4YVQ5KKtNyaQTjQGpGKaK12sPrI
Nov 23 20:39:45 compute-1 systemd-logind[793]: New session 28 of user ceph-admin.
Nov 23 20:39:45 compute-1 systemd[1]: Started Session 28 of User ceph-admin.
Nov 23 20:39:45 compute-1 sshd-session[72861]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 23 20:39:45 compute-1 sudo[72865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:39:45 compute-1 sudo[72865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:39:45 compute-1 sudo[72865]: pam_unix(sudo:session): session closed for user root
Nov 23 20:39:45 compute-1 sshd-session[72890]: Accepted publickey for ceph-admin from 192.168.122.100 port 56528 ssh2: RSA SHA256:ArvGVmp8+2uP4nDr4YVQ5KKtNyaQTjQGpGKaK12sPrI
Nov 23 20:39:45 compute-1 systemd-logind[793]: New session 29 of user ceph-admin.
Nov 23 20:39:45 compute-1 systemd[1]: Started Session 29 of User ceph-admin.
Nov 23 20:39:45 compute-1 sshd-session[72890]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 23 20:39:45 compute-1 sudo[72894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Nov 23 20:39:45 compute-1 sudo[72894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:39:45 compute-1 sudo[72894]: pam_unix(sudo:session): session closed for user root
Nov 23 20:39:45 compute-1 sshd-session[72919]: Accepted publickey for ceph-admin from 192.168.122.100 port 56534 ssh2: RSA SHA256:ArvGVmp8+2uP4nDr4YVQ5KKtNyaQTjQGpGKaK12sPrI
Nov 23 20:39:45 compute-1 systemd-logind[793]: New session 30 of user ceph-admin.
Nov 23 20:39:45 compute-1 systemd[1]: Started Session 30 of User ceph-admin.
Nov 23 20:39:45 compute-1 sshd-session[72919]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 23 20:39:46 compute-1 sshd-session[72946]: Accepted publickey for ceph-admin from 192.168.122.100 port 56546 ssh2: RSA SHA256:ArvGVmp8+2uP4nDr4YVQ5KKtNyaQTjQGpGKaK12sPrI
Nov 23 20:39:46 compute-1 systemd-logind[793]: New session 31 of user ceph-admin.
Nov 23 20:39:47 compute-1 systemd[1]: Started Session 31 of User ceph-admin.
Nov 23 20:39:47 compute-1 sshd-session[72946]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 23 20:39:47 compute-1 sudo[72950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Nov 23 20:39:47 compute-1 sudo[72950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:39:47 compute-1 sudo[72950]: pam_unix(sudo:session): session closed for user root
Nov 23 20:39:47 compute-1 sshd-session[72975]: Accepted publickey for ceph-admin from 192.168.122.100 port 56552 ssh2: RSA SHA256:ArvGVmp8+2uP4nDr4YVQ5KKtNyaQTjQGpGKaK12sPrI
Nov 23 20:39:47 compute-1 systemd-logind[793]: New session 32 of user ceph-admin.
Nov 23 20:39:47 compute-1 systemd[1]: Started Session 32 of User ceph-admin.
Nov 23 20:39:47 compute-1 sshd-session[72975]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 23 20:39:47 compute-1 sudo[72979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-1
Nov 23 20:39:47 compute-1 sudo[72979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:39:47 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 20:39:47 compute-1 sudo[72979]: pam_unix(sudo:session): session closed for user root
Nov 23 20:39:47 compute-1 sudo[73024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:39:47 compute-1 sudo[73024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:39:47 compute-1 sudo[73024]: pam_unix(sudo:session): session closed for user root
Nov 23 20:39:47 compute-1 sudo[73049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Nov 23 20:39:47 compute-1 sudo[73049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:39:48 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 20:39:48 compute-1 sudo[73049]: pam_unix(sudo:session): session closed for user root
Nov 23 20:39:48 compute-1 sudo[73094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:39:48 compute-1 sudo[73094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:39:48 compute-1 sudo[73094]: pam_unix(sudo:session): session closed for user root
Nov 23 20:39:48 compute-1 sudo[73119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 23 20:39:48 compute-1 sudo[73119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:39:48 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 20:39:48 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 20:39:48 compute-1 sudo[73119]: pam_unix(sudo:session): session closed for user root
Nov 23 20:39:48 compute-1 sudo[73182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:39:48 compute-1 sudo[73182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:39:48 compute-1 sudo[73182]: pam_unix(sudo:session): session closed for user root
Nov 23 20:39:48 compute-1 sudo[73207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 20:39:48 compute-1 sudo[73207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:39:49 compute-1 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 73245 (sysctl)
Nov 23 20:39:49 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 20:39:49 compute-1 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 23 20:39:49 compute-1 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 23 20:39:49 compute-1 sudo[73207]: pam_unix(sudo:session): session closed for user root
Nov 23 20:39:50 compute-1 sudo[73267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:39:50 compute-1 sudo[73267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:39:50 compute-1 sudo[73267]: pam_unix(sudo:session): session closed for user root
Nov 23 20:39:50 compute-1 sudo[73292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Nov 23 20:39:50 compute-1 sudo[73292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:39:50 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 20:39:50 compute-1 sudo[73292]: pam_unix(sudo:session): session closed for user root
Nov 23 20:39:50 compute-1 sudo[73336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:39:50 compute-1 sudo[73336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:39:50 compute-1 sudo[73336]: pam_unix(sudo:session): session closed for user root
Nov 23 20:39:50 compute-1 sudo[73361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -- inventory --format=json-pretty --filter-for-batch
Nov 23 20:39:50 compute-1 sudo[73361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:39:50 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 20:39:51 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 20:39:53 compute-1 systemd[1]: var-lib-containers-storage-overlay-compat3555228885-lower\x2dmapped.mount: Deactivated successfully.
Nov 23 20:40:11 compute-1 podman[73422]: 2025-11-23 20:40:11.472716224 +0000 UTC m=+20.399341405 container create e9bd2a1d5f06d9cc774a342e492c60e75f4c52d1d15afcd869cf762a22a8ad60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_hodgkin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 23 20:40:11 compute-1 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 23 20:40:11 compute-1 systemd[1]: Started libpod-conmon-e9bd2a1d5f06d9cc774a342e492c60e75f4c52d1d15afcd869cf762a22a8ad60.scope.
Nov 23 20:40:11 compute-1 systemd[1]: Started libcrun container.
Nov 23 20:40:11 compute-1 podman[73422]: 2025-11-23 20:40:11.457017035 +0000 UTC m=+20.383642236 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 20:40:11 compute-1 podman[73422]: 2025-11-23 20:40:11.572737384 +0000 UTC m=+20.499362585 container init e9bd2a1d5f06d9cc774a342e492c60e75f4c52d1d15afcd869cf762a22a8ad60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 20:40:11 compute-1 podman[73422]: 2025-11-23 20:40:11.579764202 +0000 UTC m=+20.506389383 container start e9bd2a1d5f06d9cc774a342e492c60e75f4c52d1d15afcd869cf762a22a8ad60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_hodgkin, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 20:40:11 compute-1 practical_hodgkin[73488]: 167 167
Nov 23 20:40:11 compute-1 systemd[1]: libpod-e9bd2a1d5f06d9cc774a342e492c60e75f4c52d1d15afcd869cf762a22a8ad60.scope: Deactivated successfully.
Nov 23 20:40:11 compute-1 conmon[73488]: conmon e9bd2a1d5f06d9cc774a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e9bd2a1d5f06d9cc774a342e492c60e75f4c52d1d15afcd869cf762a22a8ad60.scope/container/memory.events
Nov 23 20:40:11 compute-1 podman[73422]: 2025-11-23 20:40:11.58682564 +0000 UTC m=+20.513450821 container attach e9bd2a1d5f06d9cc774a342e492c60e75f4c52d1d15afcd869cf762a22a8ad60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_hodgkin, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 20:40:11 compute-1 podman[73422]: 2025-11-23 20:40:11.587673843 +0000 UTC m=+20.514299024 container died e9bd2a1d5f06d9cc774a342e492c60e75f4c52d1d15afcd869cf762a22a8ad60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_hodgkin, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 20:40:11 compute-1 systemd[1]: var-lib-containers-storage-overlay-d85d9378ef6de38413897711c5ab6ee1a8b2ad895100c4fa60964ce7b1213882-merged.mount: Deactivated successfully.
Nov 23 20:40:11 compute-1 podman[73422]: 2025-11-23 20:40:11.637594285 +0000 UTC m=+20.564219466 container remove e9bd2a1d5f06d9cc774a342e492c60e75f4c52d1d15afcd869cf762a22a8ad60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 20:40:11 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 20:40:11 compute-1 systemd[1]: libpod-conmon-e9bd2a1d5f06d9cc774a342e492c60e75f4c52d1d15afcd869cf762a22a8ad60.scope: Deactivated successfully.
Nov 23 20:40:11 compute-1 podman[73513]: 2025-11-23 20:40:11.764333368 +0000 UTC m=+0.020077328 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 20:40:11 compute-1 podman[73513]: 2025-11-23 20:40:11.982734397 +0000 UTC m=+0.238478327 container create e85974135ca9020bf55c9e58d5888f46504a4f0cfc6bf69b23368faf886bd81b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_colden, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 20:40:14 compute-1 systemd[1]: Started libpod-conmon-e85974135ca9020bf55c9e58d5888f46504a4f0cfc6bf69b23368faf886bd81b.scope.
Nov 23 20:40:14 compute-1 systemd[1]: Started libcrun container.
Nov 23 20:40:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f786460431bb502e9d36243a61847bf05f213d39ea7ca231331cbd9bafda844a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 20:40:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f786460431bb502e9d36243a61847bf05f213d39ea7ca231331cbd9bafda844a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 20:40:14 compute-1 podman[73513]: 2025-11-23 20:40:14.2599657 +0000 UTC m=+2.515709650 container init e85974135ca9020bf55c9e58d5888f46504a4f0cfc6bf69b23368faf886bd81b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 23 20:40:14 compute-1 podman[73513]: 2025-11-23 20:40:14.265462608 +0000 UTC m=+2.521206538 container start e85974135ca9020bf55c9e58d5888f46504a4f0cfc6bf69b23368faf886bd81b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_colden, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 20:40:14 compute-1 podman[73513]: 2025-11-23 20:40:14.274854979 +0000 UTC m=+2.530598919 container attach e85974135ca9020bf55c9e58d5888f46504a4f0cfc6bf69b23368faf886bd81b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid)
Nov 23 20:40:14 compute-1 friendly_colden[73532]: [
Nov 23 20:40:14 compute-1 friendly_colden[73532]:     {
Nov 23 20:40:14 compute-1 friendly_colden[73532]:         "available": false,
Nov 23 20:40:14 compute-1 friendly_colden[73532]:         "being_replaced": false,
Nov 23 20:40:14 compute-1 friendly_colden[73532]:         "ceph_device_lvm": false,
Nov 23 20:40:14 compute-1 friendly_colden[73532]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 23 20:40:14 compute-1 friendly_colden[73532]:         "lsm_data": {},
Nov 23 20:40:14 compute-1 friendly_colden[73532]:         "lvs": [],
Nov 23 20:40:14 compute-1 friendly_colden[73532]:         "path": "/dev/sr0",
Nov 23 20:40:14 compute-1 friendly_colden[73532]:         "rejected_reasons": [
Nov 23 20:40:14 compute-1 friendly_colden[73532]:             "Has a FileSystem",
Nov 23 20:40:14 compute-1 friendly_colden[73532]:             "Insufficient space (<5GB)"
Nov 23 20:40:14 compute-1 friendly_colden[73532]:         ],
Nov 23 20:40:14 compute-1 friendly_colden[73532]:         "sys_api": {
Nov 23 20:40:14 compute-1 friendly_colden[73532]:             "actuators": null,
Nov 23 20:40:14 compute-1 friendly_colden[73532]:             "device_nodes": [
Nov 23 20:40:14 compute-1 friendly_colden[73532]:                 "sr0"
Nov 23 20:40:14 compute-1 friendly_colden[73532]:             ],
Nov 23 20:40:14 compute-1 friendly_colden[73532]:             "devname": "sr0",
Nov 23 20:40:14 compute-1 friendly_colden[73532]:             "human_readable_size": "482.00 KB",
Nov 23 20:40:14 compute-1 friendly_colden[73532]:             "id_bus": "ata",
Nov 23 20:40:14 compute-1 friendly_colden[73532]:             "model": "QEMU DVD-ROM",
Nov 23 20:40:14 compute-1 friendly_colden[73532]:             "nr_requests": "2",
Nov 23 20:40:14 compute-1 friendly_colden[73532]:             "parent": "/dev/sr0",
Nov 23 20:40:14 compute-1 friendly_colden[73532]:             "partitions": {},
Nov 23 20:40:14 compute-1 friendly_colden[73532]:             "path": "/dev/sr0",
Nov 23 20:40:14 compute-1 friendly_colden[73532]:             "removable": "1",
Nov 23 20:40:14 compute-1 friendly_colden[73532]:             "rev": "2.5+",
Nov 23 20:40:14 compute-1 friendly_colden[73532]:             "ro": "0",
Nov 23 20:40:14 compute-1 friendly_colden[73532]:             "rotational": "1",
Nov 23 20:40:14 compute-1 friendly_colden[73532]:             "sas_address": "",
Nov 23 20:40:14 compute-1 friendly_colden[73532]:             "sas_device_handle": "",
Nov 23 20:40:14 compute-1 friendly_colden[73532]:             "scheduler_mode": "mq-deadline",
Nov 23 20:40:14 compute-1 friendly_colden[73532]:             "sectors": 0,
Nov 23 20:40:14 compute-1 friendly_colden[73532]:             "sectorsize": "2048",
Nov 23 20:40:14 compute-1 friendly_colden[73532]:             "size": 493568.0,
Nov 23 20:40:14 compute-1 friendly_colden[73532]:             "support_discard": "2048",
Nov 23 20:40:14 compute-1 friendly_colden[73532]:             "type": "disk",
Nov 23 20:40:14 compute-1 friendly_colden[73532]:             "vendor": "QEMU"
Nov 23 20:40:14 compute-1 friendly_colden[73532]:         }
Nov 23 20:40:14 compute-1 friendly_colden[73532]:     }
Nov 23 20:40:14 compute-1 friendly_colden[73532]: ]
Nov 23 20:40:14 compute-1 systemd[1]: libpod-e85974135ca9020bf55c9e58d5888f46504a4f0cfc6bf69b23368faf886bd81b.scope: Deactivated successfully.
Nov 23 20:40:14 compute-1 podman[73513]: 2025-11-23 20:40:14.945974048 +0000 UTC m=+3.201718048 container died e85974135ca9020bf55c9e58d5888f46504a4f0cfc6bf69b23368faf886bd81b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 20:40:15 compute-1 systemd[1]: var-lib-containers-storage-overlay-f786460431bb502e9d36243a61847bf05f213d39ea7ca231331cbd9bafda844a-merged.mount: Deactivated successfully.
Nov 23 20:40:15 compute-1 podman[73513]: 2025-11-23 20:40:15.317852513 +0000 UTC m=+3.573596443 container remove e85974135ca9020bf55c9e58d5888f46504a4f0cfc6bf69b23368faf886bd81b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_colden, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 20:40:15 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 20:40:15 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 20:40:15 compute-1 systemd[1]: libpod-conmon-e85974135ca9020bf55c9e58d5888f46504a4f0cfc6bf69b23368faf886bd81b.scope: Deactivated successfully.
Nov 23 20:40:15 compute-1 sudo[73361]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:15 compute-1 sudo[74418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 23 20:40:15 compute-1 sudo[74418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:15 compute-1 sudo[74418]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:15 compute-1 sudo[74443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph
Nov 23 20:40:15 compute-1 sudo[74443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:15 compute-1 sudo[74443]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:15 compute-1 sudo[74468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.conf.new
Nov 23 20:40:15 compute-1 sudo[74468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:15 compute-1 sudo[74468]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:15 compute-1 sudo[74493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:40:15 compute-1 sudo[74493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:15 compute-1 sudo[74493]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:15 compute-1 sudo[74518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.conf.new
Nov 23 20:40:15 compute-1 sudo[74518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:15 compute-1 sudo[74518]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:15 compute-1 sudo[74566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.conf.new
Nov 23 20:40:15 compute-1 sudo[74566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:15 compute-1 sudo[74566]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:16 compute-1 sudo[74591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.conf.new
Nov 23 20:40:16 compute-1 sudo[74591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:16 compute-1 sudo[74591]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:16 compute-1 sudo[74616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 23 20:40:16 compute-1 sudo[74616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:16 compute-1 sudo[74616]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:16 compute-1 sudo[74641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config
Nov 23 20:40:16 compute-1 sudo[74641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:16 compute-1 sudo[74641]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:16 compute-1 sudo[74666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config
Nov 23 20:40:16 compute-1 sudo[74666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:16 compute-1 sudo[74666]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:16 compute-1 sudo[74691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf.new
Nov 23 20:40:16 compute-1 sudo[74691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:16 compute-1 sudo[74691]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:16 compute-1 sudo[74716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:40:16 compute-1 sudo[74716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:16 compute-1 sudo[74716]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:16 compute-1 sudo[74741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf.new
Nov 23 20:40:16 compute-1 sudo[74741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:16 compute-1 sudo[74741]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:16 compute-1 sudo[74789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf.new
Nov 23 20:40:16 compute-1 sudo[74789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:16 compute-1 sudo[74789]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:16 compute-1 sudo[74814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf.new
Nov 23 20:40:16 compute-1 sudo[74814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:16 compute-1 sudo[74814]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:16 compute-1 sudo[74839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf.new /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 20:40:16 compute-1 sudo[74839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:16 compute-1 sudo[74839]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:16 compute-1 sudo[74864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 23 20:40:16 compute-1 sudo[74864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:16 compute-1 sudo[74864]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:16 compute-1 sudo[74889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph
Nov 23 20:40:16 compute-1 sudo[74889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:16 compute-1 sudo[74889]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:16 compute-1 sudo[74914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.client.admin.keyring.new
Nov 23 20:40:16 compute-1 sudo[74914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:16 compute-1 sudo[74914]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:16 compute-1 sudo[74939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:40:16 compute-1 sudo[74939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:16 compute-1 sudo[74939]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:16 compute-1 sudo[74964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.client.admin.keyring.new
Nov 23 20:40:16 compute-1 sudo[74964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:16 compute-1 sudo[74964]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:16 compute-1 sudo[75012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.client.admin.keyring.new
Nov 23 20:40:16 compute-1 sudo[75012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:16 compute-1 sudo[75012]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:17 compute-1 sudo[75037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.client.admin.keyring.new
Nov 23 20:40:17 compute-1 sudo[75037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:17 compute-1 sudo[75037]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:17 compute-1 sudo[75062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Nov 23 20:40:17 compute-1 sudo[75062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:17 compute-1 sudo[75062]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:17 compute-1 sudo[75087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config
Nov 23 20:40:17 compute-1 sudo[75087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:17 compute-1 sudo[75087]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:17 compute-1 sudo[75112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config
Nov 23 20:40:17 compute-1 sudo[75112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:17 compute-1 sudo[75112]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:17 compute-1 sudo[75137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring.new
Nov 23 20:40:17 compute-1 sudo[75137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:17 compute-1 sudo[75137]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:17 compute-1 sudo[75162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:40:17 compute-1 sudo[75162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:17 compute-1 sudo[75162]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:17 compute-1 sudo[75187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring.new
Nov 23 20:40:17 compute-1 sudo[75187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:17 compute-1 sudo[75187]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:17 compute-1 sudo[75235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring.new
Nov 23 20:40:17 compute-1 sudo[75235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:17 compute-1 sudo[75235]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:17 compute-1 sudo[75260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring.new
Nov 23 20:40:17 compute-1 sudo[75260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:17 compute-1 sudo[75260]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:17 compute-1 sudo[75285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring.new /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 20:40:17 compute-1 sudo[75285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:17 compute-1 sudo[75285]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:17 compute-1 sudo[75310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:40:17 compute-1 sudo[75310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:17 compute-1 sudo[75310]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:17 compute-1 sudo[75335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:40:17 compute-1 sudo[75335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:18 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 20:40:18 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 20:40:18 compute-1 podman[75399]: 2025-11-23 20:40:18.192076019 +0000 UTC m=+0.042045181 container create 835d471630931944b7f9bab88b5da2fbefc6a2de406072d30b4de3539d648391 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_solomon, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 20:40:18 compute-1 systemd[1]: Started libpod-conmon-835d471630931944b7f9bab88b5da2fbefc6a2de406072d30b4de3539d648391.scope.
Nov 23 20:40:18 compute-1 podman[75399]: 2025-11-23 20:40:18.172698272 +0000 UTC m=+0.022667474 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 20:40:18 compute-1 systemd[1]: Started libcrun container.
Nov 23 20:40:18 compute-1 podman[75399]: 2025-11-23 20:40:18.414630589 +0000 UTC m=+0.264599761 container init 835d471630931944b7f9bab88b5da2fbefc6a2de406072d30b4de3539d648391 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 20:40:18 compute-1 podman[75399]: 2025-11-23 20:40:18.4226567 +0000 UTC m=+0.272625862 container start 835d471630931944b7f9bab88b5da2fbefc6a2de406072d30b4de3539d648391 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_solomon, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 20:40:18 compute-1 podman[75399]: 2025-11-23 20:40:18.427643823 +0000 UTC m=+0.277613005 container attach 835d471630931944b7f9bab88b5da2fbefc6a2de406072d30b4de3539d648391 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_solomon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 23 20:40:18 compute-1 competent_solomon[75416]: 167 167
Nov 23 20:40:18 compute-1 systemd[1]: libpod-835d471630931944b7f9bab88b5da2fbefc6a2de406072d30b4de3539d648391.scope: Deactivated successfully.
Nov 23 20:40:18 compute-1 podman[75399]: 2025-11-23 20:40:18.429838206 +0000 UTC m=+0.279807368 container died 835d471630931944b7f9bab88b5da2fbefc6a2de406072d30b4de3539d648391 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 20:40:18 compute-1 podman[75399]: 2025-11-23 20:40:18.571331375 +0000 UTC m=+0.421300537 container remove 835d471630931944b7f9bab88b5da2fbefc6a2de406072d30b4de3539d648391 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_solomon, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 20:40:18 compute-1 systemd[1]: libpod-conmon-835d471630931944b7f9bab88b5da2fbefc6a2de406072d30b4de3539d648391.scope: Deactivated successfully.
Nov 23 20:40:18 compute-1 systemd[1]: Reloading.
Nov 23 20:40:18 compute-1 systemd-rc-local-generator[75462]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:40:18 compute-1 systemd-sysv-generator[75465]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:40:18 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 20:40:19 compute-1 systemd[1]: Reloading.
Nov 23 20:40:19 compute-1 systemd-rc-local-generator[75498]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:40:19 compute-1 systemd-sysv-generator[75502]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:40:19 compute-1 systemd[1]: Reached target All Ceph clusters and services.
Nov 23 20:40:19 compute-1 systemd[1]: Reloading.
Nov 23 20:40:19 compute-1 systemd-rc-local-generator[75536]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:40:19 compute-1 systemd-sysv-generator[75539]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:40:19 compute-1 systemd[1]: Reached target Ceph cluster 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 20:40:19 compute-1 systemd[1]: Reloading.
Nov 23 20:40:19 compute-1 systemd-rc-local-generator[75574]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:40:19 compute-1 systemd-sysv-generator[75578]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:40:19 compute-1 systemd[1]: Reloading.
Nov 23 20:40:19 compute-1 systemd-sysv-generator[75617]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:40:19 compute-1 systemd-rc-local-generator[75613]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:40:19 compute-1 systemd[1]: Created slice Slice /system/ceph-03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 20:40:19 compute-1 systemd[1]: Reached target System Time Set.
Nov 23 20:40:19 compute-1 systemd[1]: Reached target System Time Synchronized.
Nov 23 20:40:19 compute-1 systemd[1]: Starting Ceph crash.compute-1 for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 20:40:20 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 20:40:20 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 20:40:20 compute-1 podman[75671]: 2025-11-23 20:40:20.214765697 +0000 UTC m=+0.036587043 container create e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True)
Nov 23 20:40:20 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48b95c3c65c5acefb31fd3da1240eb22d79c78a1838c3103e9fbae9f77b2cc08/merged/etc/ceph/ceph.client.crash.compute-1.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 20:40:20 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48b95c3c65c5acefb31fd3da1240eb22d79c78a1838c3103e9fbae9f77b2cc08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 20:40:20 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48b95c3c65c5acefb31fd3da1240eb22d79c78a1838c3103e9fbae9f77b2cc08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 20:40:20 compute-1 podman[75671]: 2025-11-23 20:40:20.26494338 +0000 UTC m=+0.086764746 container init e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 23 20:40:20 compute-1 podman[75671]: 2025-11-23 20:40:20.269919903 +0000 UTC m=+0.091741249 container start e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Nov 23 20:40:20 compute-1 bash[75671]: e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008
Nov 23 20:40:20 compute-1 podman[75671]: 2025-11-23 20:40:20.196070519 +0000 UTC m=+0.017891885 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 20:40:20 compute-1 systemd[1]: Started Ceph crash.compute-1 for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 20:40:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1[75686]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 23 20:40:20 compute-1 sudo[75335]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1[75686]: 2025-11-23T20:40:20.407+0000 7fcd249a7640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 23 20:40:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1[75686]: 2025-11-23T20:40:20.407+0000 7fcd249a7640 -1 AuthRegistry(0x7fcd1c0698f0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 23 20:40:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1[75686]: 2025-11-23T20:40:20.408+0000 7fcd249a7640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 23 20:40:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1[75686]: 2025-11-23T20:40:20.408+0000 7fcd249a7640 -1 AuthRegistry(0x7fcd249a5ff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 23 20:40:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1[75686]: 2025-11-23T20:40:20.409+0000 7fcd2271c640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 23 20:40:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1[75686]: 2025-11-23T20:40:20.409+0000 7fcd249a7640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 23 20:40:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1[75686]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 23 20:40:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1[75686]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Nov 23 20:40:20 compute-1 sudo[75693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:40:20 compute-1 sudo[75693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:20 compute-1 sudo[75693]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:20 compute-1 sudo[75728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 23 20:40:20 compute-1 sudo[75728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:20 compute-1 podman[75792]: 2025-11-23 20:40:20.842229841 +0000 UTC m=+0.038139488 container create 8f5eac7277f3d9284b82f4f7f675e851f9544acf9047f4672bcfb3bbfa186100 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_brattain, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 20:40:20 compute-1 systemd[1]: Started libpod-conmon-8f5eac7277f3d9284b82f4f7f675e851f9544acf9047f4672bcfb3bbfa186100.scope.
Nov 23 20:40:20 compute-1 systemd[1]: Started libcrun container.
Nov 23 20:40:20 compute-1 podman[75792]: 2025-11-23 20:40:20.825487289 +0000 UTC m=+0.021396966 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 20:40:20 compute-1 podman[75792]: 2025-11-23 20:40:20.926822964 +0000 UTC m=+0.122732641 container init 8f5eac7277f3d9284b82f4f7f675e851f9544acf9047f4672bcfb3bbfa186100 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_brattain, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 23 20:40:20 compute-1 podman[75792]: 2025-11-23 20:40:20.935568765 +0000 UTC m=+0.131478402 container start 8f5eac7277f3d9284b82f4f7f675e851f9544acf9047f4672bcfb3bbfa186100 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_brattain, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 20:40:20 compute-1 podman[75792]: 2025-11-23 20:40:20.939653953 +0000 UTC m=+0.135563650 container attach 8f5eac7277f3d9284b82f4f7f675e851f9544acf9047f4672bcfb3bbfa186100 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_brattain, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 20:40:20 compute-1 quizzical_brattain[75808]: 167 167
Nov 23 20:40:20 compute-1 systemd[1]: libpod-8f5eac7277f3d9284b82f4f7f675e851f9544acf9047f4672bcfb3bbfa186100.scope: Deactivated successfully.
Nov 23 20:40:20 compute-1 podman[75792]: 2025-11-23 20:40:20.942716071 +0000 UTC m=+0.138625728 container died 8f5eac7277f3d9284b82f4f7f675e851f9544acf9047f4672bcfb3bbfa186100 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_brattain, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 20:40:21 compute-1 systemd[1]: var-lib-containers-storage-overlay-841acbcb219eb790486c9d18f23e56dc89726b7081af75c6be34bac8e39c9a68-merged.mount: Deactivated successfully.
Nov 23 20:40:21 compute-1 podman[75792]: 2025-11-23 20:40:21.081838341 +0000 UTC m=+0.277747998 container remove 8f5eac7277f3d9284b82f4f7f675e851f9544acf9047f4672bcfb3bbfa186100 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 23 20:40:21 compute-1 systemd[1]: libpod-conmon-8f5eac7277f3d9284b82f4f7f675e851f9544acf9047f4672bcfb3bbfa186100.scope: Deactivated successfully.
Nov 23 20:40:21 compute-1 podman[75831]: 2025-11-23 20:40:21.268051926 +0000 UTC m=+0.047682452 container create dba01039bbca5f83b306f3e6f65f6688b3441a2056a3bd0842f7b4ebce97b2a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 23 20:40:21 compute-1 systemd[1]: Started libpod-conmon-dba01039bbca5f83b306f3e6f65f6688b3441a2056a3bd0842f7b4ebce97b2a4.scope.
Nov 23 20:40:21 compute-1 systemd[1]: Started libcrun container.
Nov 23 20:40:21 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d19f858e30e97909a553f27a18d198ce7376239986c3a5a8495e2fe107bbde5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 20:40:21 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d19f858e30e97909a553f27a18d198ce7376239986c3a5a8495e2fe107bbde5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 20:40:21 compute-1 podman[75831]: 2025-11-23 20:40:21.242689577 +0000 UTC m=+0.022320123 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 20:40:21 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d19f858e30e97909a553f27a18d198ce7376239986c3a5a8495e2fe107bbde5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 20:40:21 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d19f858e30e97909a553f27a18d198ce7376239986c3a5a8495e2fe107bbde5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 20:40:21 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d19f858e30e97909a553f27a18d198ce7376239986c3a5a8495e2fe107bbde5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 20:40:21 compute-1 podman[75831]: 2025-11-23 20:40:21.347054349 +0000 UTC m=+0.126684915 container init dba01039bbca5f83b306f3e6f65f6688b3441a2056a3bd0842f7b4ebce97b2a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_mahavira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 23 20:40:21 compute-1 podman[75831]: 2025-11-23 20:40:21.358497148 +0000 UTC m=+0.138127664 container start dba01039bbca5f83b306f3e6f65f6688b3441a2056a3bd0842f7b4ebce97b2a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_mahavira, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 20:40:21 compute-1 podman[75831]: 2025-11-23 20:40:21.361992499 +0000 UTC m=+0.141623045 container attach dba01039bbca5f83b306f3e6f65f6688b3441a2056a3bd0842f7b4ebce97b2a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_mahavira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 20:40:21 compute-1 affectionate_mahavira[75847]: --> passed data devices: 0 physical, 1 LVM
Nov 23 20:40:21 compute-1 affectionate_mahavira[75847]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 23 20:40:21 compute-1 affectionate_mahavira[75847]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 23 20:40:21 compute-1 affectionate_mahavira[75847]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new f9775703-f092-47d3-b1e4-23e694631322
Nov 23 20:40:22 compute-1 affectionate_mahavira[75847]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Nov 23 20:40:22 compute-1 affectionate_mahavira[75847]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 23 20:40:22 compute-1 lvm[75908]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 20:40:22 compute-1 lvm[75908]: VG ceph_vg0 finished
Nov 23 20:40:22 compute-1 affectionate_mahavira[75847]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 23 20:40:22 compute-1 affectionate_mahavira[75847]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 23 20:40:22 compute-1 affectionate_mahavira[75847]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Nov 23 20:40:22 compute-1 affectionate_mahavira[75847]:  stderr: got monmap epoch 1
Nov 23 20:40:22 compute-1 affectionate_mahavira[75847]: --> Creating keyring file for osd.0
Nov 23 20:40:22 compute-1 affectionate_mahavira[75847]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Nov 23 20:40:22 compute-1 affectionate_mahavira[75847]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Nov 23 20:40:22 compute-1 affectionate_mahavira[75847]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid f9775703-f092-47d3-b1e4-23e694631322 --setuser ceph --setgroup ceph
Nov 23 20:40:26 compute-1 sshd-session[76539]: Invalid user update from 34.91.0.68 port 38202
Nov 23 20:40:26 compute-1 sshd-session[76539]: Received disconnect from 34.91.0.68 port 38202:11: Bye Bye [preauth]
Nov 23 20:40:26 compute-1 sshd-session[76539]: Disconnected from invalid user update 34.91.0.68 port 38202 [preauth]
Nov 23 20:40:26 compute-1 affectionate_mahavira[75847]:  stderr: 2025-11-23T20:40:22.964+0000 7ff9bc9eb740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Nov 23 20:40:26 compute-1 affectionate_mahavira[75847]:  stderr: 2025-11-23T20:40:23.233+0000 7ff9bc9eb740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Nov 23 20:40:26 compute-1 affectionate_mahavira[75847]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 23 20:40:26 compute-1 affectionate_mahavira[75847]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 23 20:40:26 compute-1 affectionate_mahavira[75847]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Nov 23 20:40:26 compute-1 affectionate_mahavira[75847]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 23 20:40:26 compute-1 affectionate_mahavira[75847]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Nov 23 20:40:26 compute-1 affectionate_mahavira[75847]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 23 20:40:26 compute-1 affectionate_mahavira[75847]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 23 20:40:26 compute-1 affectionate_mahavira[75847]: --> ceph-volume lvm activate successful for osd ID: 0
Nov 23 20:40:26 compute-1 affectionate_mahavira[75847]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Nov 23 20:40:26 compute-1 systemd[1]: libpod-dba01039bbca5f83b306f3e6f65f6688b3441a2056a3bd0842f7b4ebce97b2a4.scope: Deactivated successfully.
Nov 23 20:40:26 compute-1 systemd[1]: libpod-dba01039bbca5f83b306f3e6f65f6688b3441a2056a3bd0842f7b4ebce97b2a4.scope: Consumed 1.937s CPU time.
Nov 23 20:40:26 compute-1 podman[76832]: 2025-11-23 20:40:26.943134139 +0000 UTC m=+0.024596929 container died dba01039bbca5f83b306f3e6f65f6688b3441a2056a3bd0842f7b4ebce97b2a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_mahavira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 20:40:26 compute-1 systemd[1]: var-lib-containers-storage-overlay-7d19f858e30e97909a553f27a18d198ce7376239986c3a5a8495e2fe107bbde5-merged.mount: Deactivated successfully.
Nov 23 20:40:27 compute-1 podman[76832]: 2025-11-23 20:40:27.010799505 +0000 UTC m=+0.092262285 container remove dba01039bbca5f83b306f3e6f65f6688b3441a2056a3bd0842f7b4ebce97b2a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_mahavira, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1)
Nov 23 20:40:27 compute-1 systemd[1]: libpod-conmon-dba01039bbca5f83b306f3e6f65f6688b3441a2056a3bd0842f7b4ebce97b2a4.scope: Deactivated successfully.
Nov 23 20:40:27 compute-1 sudo[75728]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:27 compute-1 sudo[76848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:40:27 compute-1 sudo[76848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:27 compute-1 sudo[76848]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:27 compute-1 sudo[76873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -- lvm list --format json
Nov 23 20:40:27 compute-1 sudo[76873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:27 compute-1 podman[76938]: 2025-11-23 20:40:27.586387797 +0000 UTC m=+0.057963247 container create d38c0eb54a98b27d67e04df885cdcf78034dd0999650a4c0288e78d85bfa3f05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_burnell, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 20:40:27 compute-1 systemd[1]: Started libpod-conmon-d38c0eb54a98b27d67e04df885cdcf78034dd0999650a4c0288e78d85bfa3f05.scope.
Nov 23 20:40:27 compute-1 podman[76938]: 2025-11-23 20:40:27.557193627 +0000 UTC m=+0.028769157 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 20:40:27 compute-1 systemd[1]: Started libcrun container.
Nov 23 20:40:27 compute-1 podman[76938]: 2025-11-23 20:40:27.676149418 +0000 UTC m=+0.147724868 container init d38c0eb54a98b27d67e04df885cdcf78034dd0999650a4c0288e78d85bfa3f05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 23 20:40:27 compute-1 podman[76938]: 2025-11-23 20:40:27.684063836 +0000 UTC m=+0.155639286 container start d38c0eb54a98b27d67e04df885cdcf78034dd0999650a4c0288e78d85bfa3f05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_burnell, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Nov 23 20:40:27 compute-1 infallible_burnell[76954]: 167 167
Nov 23 20:40:27 compute-1 podman[76938]: 2025-11-23 20:40:27.689698828 +0000 UTC m=+0.161274308 container attach d38c0eb54a98b27d67e04df885cdcf78034dd0999650a4c0288e78d85bfa3f05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 20:40:27 compute-1 systemd[1]: libpod-d38c0eb54a98b27d67e04df885cdcf78034dd0999650a4c0288e78d85bfa3f05.scope: Deactivated successfully.
Nov 23 20:40:27 compute-1 podman[76938]: 2025-11-23 20:40:27.69077637 +0000 UTC m=+0.162351820 container died d38c0eb54a98b27d67e04df885cdcf78034dd0999650a4c0288e78d85bfa3f05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 20:40:27 compute-1 systemd[1]: var-lib-containers-storage-overlay-a7d6d9e608b5ab2711a2ea652747770a7202c622129a5594dd6d701780735ff1-merged.mount: Deactivated successfully.
Nov 23 20:40:27 compute-1 podman[76938]: 2025-11-23 20:40:27.731077818 +0000 UTC m=+0.202653268 container remove d38c0eb54a98b27d67e04df885cdcf78034dd0999650a4c0288e78d85bfa3f05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True)
Nov 23 20:40:27 compute-1 systemd[1]: libpod-conmon-d38c0eb54a98b27d67e04df885cdcf78034dd0999650a4c0288e78d85bfa3f05.scope: Deactivated successfully.
Nov 23 20:40:27 compute-1 podman[76977]: 2025-11-23 20:40:27.885635913 +0000 UTC m=+0.048939298 container create 447f99d2af739880bbbdced1fce3ac57387decd2167435e8748dd6ad4027344b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Nov 23 20:40:27 compute-1 systemd[1]: Started libpod-conmon-447f99d2af739880bbbdced1fce3ac57387decd2167435e8748dd6ad4027344b.scope.
Nov 23 20:40:27 compute-1 systemd[1]: Started libcrun container.
Nov 23 20:40:27 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7321642969a322c229c3dd76362bb360881a65d749dcf88d0648864130c0ac1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 20:40:27 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7321642969a322c229c3dd76362bb360881a65d749dcf88d0648864130c0ac1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 20:40:27 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7321642969a322c229c3dd76362bb360881a65d749dcf88d0648864130c0ac1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 20:40:27 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7321642969a322c229c3dd76362bb360881a65d749dcf88d0648864130c0ac1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 20:40:27 compute-1 podman[76977]: 2025-11-23 20:40:27.861916491 +0000 UTC m=+0.025219926 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 20:40:27 compute-1 podman[76977]: 2025-11-23 20:40:27.962527695 +0000 UTC m=+0.125831100 container init 447f99d2af739880bbbdced1fce3ac57387decd2167435e8748dd6ad4027344b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_aryabhata, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 20:40:27 compute-1 podman[76977]: 2025-11-23 20:40:27.969826594 +0000 UTC m=+0.133129999 container start 447f99d2af739880bbbdced1fce3ac57387decd2167435e8748dd6ad4027344b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 23 20:40:27 compute-1 podman[76977]: 2025-11-23 20:40:27.974322603 +0000 UTC m=+0.137625988 container attach 447f99d2af739880bbbdced1fce3ac57387decd2167435e8748dd6ad4027344b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 23 20:40:28 compute-1 tender_aryabhata[76993]: {
Nov 23 20:40:28 compute-1 tender_aryabhata[76993]:     "0": [
Nov 23 20:40:28 compute-1 tender_aryabhata[76993]:         {
Nov 23 20:40:28 compute-1 tender_aryabhata[76993]:             "devices": [
Nov 23 20:40:28 compute-1 tender_aryabhata[76993]:                 "/dev/loop3"
Nov 23 20:40:28 compute-1 tender_aryabhata[76993]:             ],
Nov 23 20:40:28 compute-1 tender_aryabhata[76993]:             "lv_name": "ceph_lv0",
Nov 23 20:40:28 compute-1 tender_aryabhata[76993]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 20:40:28 compute-1 tender_aryabhata[76993]:             "lv_size": "21470642176",
Nov 23 20:40:28 compute-1 tender_aryabhata[76993]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ioaAYR-gFzA-A11a-ddiv-8k6F-N5qc-RuEB9j,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f9775703-f092-47d3-b1e4-23e694631322,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 20:40:28 compute-1 tender_aryabhata[76993]:             "lv_uuid": "ioaAYR-gFzA-A11a-ddiv-8k6F-N5qc-RuEB9j",
Nov 23 20:40:28 compute-1 tender_aryabhata[76993]:             "name": "ceph_lv0",
Nov 23 20:40:28 compute-1 tender_aryabhata[76993]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 20:40:28 compute-1 tender_aryabhata[76993]:             "tags": {
Nov 23 20:40:28 compute-1 tender_aryabhata[76993]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 20:40:28 compute-1 tender_aryabhata[76993]:                 "ceph.block_uuid": "ioaAYR-gFzA-A11a-ddiv-8k6F-N5qc-RuEB9j",
Nov 23 20:40:28 compute-1 tender_aryabhata[76993]:                 "ceph.cephx_lockbox_secret": "",
Nov 23 20:40:28 compute-1 tender_aryabhata[76993]:                 "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 20:40:28 compute-1 tender_aryabhata[76993]:                 "ceph.cluster_name": "ceph",
Nov 23 20:40:28 compute-1 tender_aryabhata[76993]:                 "ceph.crush_device_class": "",
Nov 23 20:40:28 compute-1 tender_aryabhata[76993]:                 "ceph.encrypted": "0",
Nov 23 20:40:28 compute-1 tender_aryabhata[76993]:                 "ceph.osd_fsid": "f9775703-f092-47d3-b1e4-23e694631322",
Nov 23 20:40:28 compute-1 tender_aryabhata[76993]:                 "ceph.osd_id": "0",
Nov 23 20:40:28 compute-1 tender_aryabhata[76993]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 23 20:40:28 compute-1 tender_aryabhata[76993]:                 "ceph.type": "block",
Nov 23 20:40:28 compute-1 tender_aryabhata[76993]:                 "ceph.vdo": "0",
Nov 23 20:40:28 compute-1 tender_aryabhata[76993]:                 "ceph.with_tpm": "0"
Nov 23 20:40:28 compute-1 tender_aryabhata[76993]:             },
Nov 23 20:40:28 compute-1 tender_aryabhata[76993]:             "type": "block",
Nov 23 20:40:28 compute-1 tender_aryabhata[76993]:             "vg_name": "ceph_vg0"
Nov 23 20:40:28 compute-1 tender_aryabhata[76993]:         }
Nov 23 20:40:28 compute-1 tender_aryabhata[76993]:     ]
Nov 23 20:40:28 compute-1 tender_aryabhata[76993]: }
Nov 23 20:40:28 compute-1 systemd[1]: libpod-447f99d2af739880bbbdced1fce3ac57387decd2167435e8748dd6ad4027344b.scope: Deactivated successfully.
Nov 23 20:40:28 compute-1 podman[76977]: 2025-11-23 20:40:28.249178997 +0000 UTC m=+0.412482382 container died 447f99d2af739880bbbdced1fce3ac57387decd2167435e8748dd6ad4027344b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_aryabhata, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 23 20:40:28 compute-1 systemd[1]: var-lib-containers-storage-overlay-c7321642969a322c229c3dd76362bb360881a65d749dcf88d0648864130c0ac1-merged.mount: Deactivated successfully.
Nov 23 20:40:28 compute-1 podman[76977]: 2025-11-23 20:40:28.316552786 +0000 UTC m=+0.479856171 container remove 447f99d2af739880bbbdced1fce3ac57387decd2167435e8748dd6ad4027344b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_aryabhata, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 20:40:28 compute-1 systemd[1]: libpod-conmon-447f99d2af739880bbbdced1fce3ac57387decd2167435e8748dd6ad4027344b.scope: Deactivated successfully.
Nov 23 20:40:28 compute-1 sudo[76873]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:28 compute-1 sudo[77013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:40:28 compute-1 sudo[77013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:28 compute-1 sudo[77013]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:28 compute-1 sudo[77038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:40:28 compute-1 sudo[77038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:28 compute-1 podman[77104]: 2025-11-23 20:40:28.86299049 +0000 UTC m=+0.070179089 container create 4c8a0dfcac12190cd012579bb8129bb8ecdf3cd449ac4c589b4cc40fb9e8c553 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_agnesi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 20:40:28 compute-1 podman[77104]: 2025-11-23 20:40:28.81396071 +0000 UTC m=+0.021149359 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 20:40:28 compute-1 systemd[1]: Started libpod-conmon-4c8a0dfcac12190cd012579bb8129bb8ecdf3cd449ac4c589b4cc40fb9e8c553.scope.
Nov 23 20:40:29 compute-1 systemd[1]: Started libcrun container.
Nov 23 20:40:29 compute-1 podman[77104]: 2025-11-23 20:40:29.041711059 +0000 UTC m=+0.248899778 container init 4c8a0dfcac12190cd012579bb8129bb8ecdf3cd449ac4c589b4cc40fb9e8c553 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 20:40:29 compute-1 podman[77104]: 2025-11-23 20:40:29.049407241 +0000 UTC m=+0.256595840 container start 4c8a0dfcac12190cd012579bb8129bb8ecdf3cd449ac4c589b4cc40fb9e8c553 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_agnesi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 20:40:29 compute-1 infallible_agnesi[77121]: 167 167
Nov 23 20:40:29 compute-1 systemd[1]: libpod-4c8a0dfcac12190cd012579bb8129bb8ecdf3cd449ac4c589b4cc40fb9e8c553.scope: Deactivated successfully.
Nov 23 20:40:29 compute-1 podman[77104]: 2025-11-23 20:40:29.066380889 +0000 UTC m=+0.273569538 container attach 4c8a0dfcac12190cd012579bb8129bb8ecdf3cd449ac4c589b4cc40fb9e8c553 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 20:40:29 compute-1 podman[77104]: 2025-11-23 20:40:29.066841912 +0000 UTC m=+0.274030551 container died 4c8a0dfcac12190cd012579bb8129bb8ecdf3cd449ac4c589b4cc40fb9e8c553 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_agnesi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 20:40:29 compute-1 systemd[1]: var-lib-containers-storage-overlay-fa11f2b0bb9856b962f226fa9a2c8966defd92bee00dc499e59a86acbf64a777-merged.mount: Deactivated successfully.
Nov 23 20:40:29 compute-1 podman[77104]: 2025-11-23 20:40:29.141757866 +0000 UTC m=+0.348946485 container remove 4c8a0dfcac12190cd012579bb8129bb8ecdf3cd449ac4c589b4cc40fb9e8c553 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_agnesi, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 23 20:40:29 compute-1 systemd[1]: libpod-conmon-4c8a0dfcac12190cd012579bb8129bb8ecdf3cd449ac4c589b4cc40fb9e8c553.scope: Deactivated successfully.
Nov 23 20:40:29 compute-1 podman[77154]: 2025-11-23 20:40:29.372082669 +0000 UTC m=+0.049341699 container create d83643ea7da0fdebc8408e4c615efdeb873abb8403827507f96c914fdbe69b75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate-test, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 20:40:29 compute-1 systemd[1]: Started libpod-conmon-d83643ea7da0fdebc8408e4c615efdeb873abb8403827507f96c914fdbe69b75.scope.
Nov 23 20:40:29 compute-1 systemd[1]: Started libcrun container.
Nov 23 20:40:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc16f4d2876c96cee2e6a1149bfa4567b2ca6ac854e3cb6c5798e0539b9192bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 20:40:29 compute-1 podman[77154]: 2025-11-23 20:40:29.343244121 +0000 UTC m=+0.020503161 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 20:40:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc16f4d2876c96cee2e6a1149bfa4567b2ca6ac854e3cb6c5798e0539b9192bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 20:40:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc16f4d2876c96cee2e6a1149bfa4567b2ca6ac854e3cb6c5798e0539b9192bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 20:40:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc16f4d2876c96cee2e6a1149bfa4567b2ca6ac854e3cb6c5798e0539b9192bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 20:40:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc16f4d2876c96cee2e6a1149bfa4567b2ca6ac854e3cb6c5798e0539b9192bd/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 23 20:40:29 compute-1 podman[77154]: 2025-11-23 20:40:29.450411242 +0000 UTC m=+0.127670282 container init d83643ea7da0fdebc8408e4c615efdeb873abb8403827507f96c914fdbe69b75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate-test, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 20:40:29 compute-1 podman[77154]: 2025-11-23 20:40:29.458214487 +0000 UTC m=+0.135473497 container start d83643ea7da0fdebc8408e4c615efdeb873abb8403827507f96c914fdbe69b75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate-test, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 23 20:40:29 compute-1 podman[77154]: 2025-11-23 20:40:29.473025593 +0000 UTC m=+0.150284603 container attach d83643ea7da0fdebc8408e4c615efdeb873abb8403827507f96c914fdbe69b75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate-test, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 20:40:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate-test[77171]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Nov 23 20:40:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate-test[77171]:                             [--no-systemd] [--no-tmpfs]
Nov 23 20:40:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate-test[77171]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 23 20:40:29 compute-1 systemd[1]: libpod-d83643ea7da0fdebc8408e4c615efdeb873abb8403827507f96c914fdbe69b75.scope: Deactivated successfully.
Nov 23 20:40:29 compute-1 podman[77154]: 2025-11-23 20:40:29.641612661 +0000 UTC m=+0.318871671 container died d83643ea7da0fdebc8408e4c615efdeb873abb8403827507f96c914fdbe69b75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 20:40:29 compute-1 systemd[1]: var-lib-containers-storage-overlay-bc16f4d2876c96cee2e6a1149bfa4567b2ca6ac854e3cb6c5798e0539b9192bd-merged.mount: Deactivated successfully.
Nov 23 20:40:29 compute-1 podman[77154]: 2025-11-23 20:40:29.828505895 +0000 UTC m=+0.505764905 container remove d83643ea7da0fdebc8408e4c615efdeb873abb8403827507f96c914fdbe69b75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate-test, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 23 20:40:29 compute-1 systemd[1]: libpod-conmon-d83643ea7da0fdebc8408e4c615efdeb873abb8403827507f96c914fdbe69b75.scope: Deactivated successfully.
Nov 23 20:40:30 compute-1 systemd[1]: Reloading.
Nov 23 20:40:30 compute-1 systemd-rc-local-generator[77233]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:40:30 compute-1 systemd-sysv-generator[77236]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:40:30 compute-1 systemd[1]: Reloading.
Nov 23 20:40:30 compute-1 systemd-rc-local-generator[77273]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:40:30 compute-1 systemd-sysv-generator[77277]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:40:30 compute-1 systemd[1]: Starting Ceph osd.0 for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 20:40:30 compute-1 podman[77331]: 2025-11-23 20:40:30.751758297 +0000 UTC m=+0.054758196 container create 518994d162ec21eecce77a9361b1e108ac4593a01aa5840d8a3f905d288097b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 20:40:30 compute-1 podman[77331]: 2025-11-23 20:40:30.71639466 +0000 UTC m=+0.019394589 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 20:40:31 compute-1 systemd[1]: Started libcrun container.
Nov 23 20:40:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4676136bc24d41b20a469446a6c0b1fe08819aa93e98fd8876dadc8ae19fdd75/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 20:40:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4676136bc24d41b20a469446a6c0b1fe08819aa93e98fd8876dadc8ae19fdd75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 20:40:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4676136bc24d41b20a469446a6c0b1fe08819aa93e98fd8876dadc8ae19fdd75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 20:40:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4676136bc24d41b20a469446a6c0b1fe08819aa93e98fd8876dadc8ae19fdd75/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 20:40:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4676136bc24d41b20a469446a6c0b1fe08819aa93e98fd8876dadc8ae19fdd75/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 23 20:40:31 compute-1 podman[77331]: 2025-11-23 20:40:31.063169962 +0000 UTC m=+0.366169871 container init 518994d162ec21eecce77a9361b1e108ac4593a01aa5840d8a3f905d288097b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 23 20:40:31 compute-1 podman[77331]: 2025-11-23 20:40:31.068561307 +0000 UTC m=+0.371561206 container start 518994d162ec21eecce77a9361b1e108ac4593a01aa5840d8a3f905d288097b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 20:40:31 compute-1 podman[77331]: 2025-11-23 20:40:31.072054078 +0000 UTC m=+0.375053977 container attach 518994d162ec21eecce77a9361b1e108ac4593a01aa5840d8a3f905d288097b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325)
Nov 23 20:40:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate[77346]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 23 20:40:31 compute-1 bash[77331]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 23 20:40:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate[77346]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 23 20:40:31 compute-1 bash[77331]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 23 20:40:31 compute-1 lvm[77427]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 20:40:31 compute-1 lvm[77427]: VG ceph_vg0 finished
Nov 23 20:40:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate[77346]: --> Failed to activate via raw: did not find any matching OSD to activate
Nov 23 20:40:31 compute-1 bash[77331]: --> Failed to activate via raw: did not find any matching OSD to activate
Nov 23 20:40:31 compute-1 bash[77331]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 23 20:40:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate[77346]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 23 20:40:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate[77346]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 23 20:40:31 compute-1 bash[77331]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 23 20:40:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate[77346]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 23 20:40:31 compute-1 bash[77331]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 23 20:40:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate[77346]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Nov 23 20:40:31 compute-1 bash[77331]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Nov 23 20:40:32 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate[77346]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 23 20:40:32 compute-1 bash[77331]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 23 20:40:32 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate[77346]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Nov 23 20:40:32 compute-1 bash[77331]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Nov 23 20:40:32 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate[77346]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 23 20:40:32 compute-1 bash[77331]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 23 20:40:32 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate[77346]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 23 20:40:32 compute-1 bash[77331]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 23 20:40:32 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate[77346]: --> ceph-volume lvm activate successful for osd ID: 0
Nov 23 20:40:32 compute-1 bash[77331]: --> ceph-volume lvm activate successful for osd ID: 0
Nov 23 20:40:32 compute-1 systemd[1]: libpod-518994d162ec21eecce77a9361b1e108ac4593a01aa5840d8a3f905d288097b5.scope: Deactivated successfully.
Nov 23 20:40:32 compute-1 systemd[1]: libpod-518994d162ec21eecce77a9361b1e108ac4593a01aa5840d8a3f905d288097b5.scope: Consumed 1.230s CPU time.
Nov 23 20:40:32 compute-1 podman[77538]: 2025-11-23 20:40:32.307855906 +0000 UTC m=+0.023508877 container died 518994d162ec21eecce77a9361b1e108ac4593a01aa5840d8a3f905d288097b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 23 20:40:32 compute-1 systemd[1]: var-lib-containers-storage-overlay-4676136bc24d41b20a469446a6c0b1fe08819aa93e98fd8876dadc8ae19fdd75-merged.mount: Deactivated successfully.
Nov 23 20:40:32 compute-1 podman[77538]: 2025-11-23 20:40:32.346208809 +0000 UTC m=+0.061861780 container remove 518994d162ec21eecce77a9361b1e108ac4593a01aa5840d8a3f905d288097b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 23 20:40:32 compute-1 podman[77596]: 2025-11-23 20:40:32.52357612 +0000 UTC m=+0.042375630 container create 0867d5176dde679eedb1159c9fe9b63acffcda07d2fe1b063e26d75a6a0e4fdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 23 20:40:32 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abf960282a8df093e930d2c4ef14eda295e8c6a582da998673935a8c565d60ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 20:40:32 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abf960282a8df093e930d2c4ef14eda295e8c6a582da998673935a8c565d60ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 20:40:32 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abf960282a8df093e930d2c4ef14eda295e8c6a582da998673935a8c565d60ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 20:40:32 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abf960282a8df093e930d2c4ef14eda295e8c6a582da998673935a8c565d60ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 20:40:32 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abf960282a8df093e930d2c4ef14eda295e8c6a582da998673935a8c565d60ad/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 23 20:40:32 compute-1 podman[77596]: 2025-11-23 20:40:32.594272313 +0000 UTC m=+0.113071853 container init 0867d5176dde679eedb1159c9fe9b63acffcda07d2fe1b063e26d75a6a0e4fdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 20:40:32 compute-1 podman[77596]: 2025-11-23 20:40:32.501603348 +0000 UTC m=+0.020402878 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 20:40:32 compute-1 podman[77596]: 2025-11-23 20:40:32.602206061 +0000 UTC m=+0.121005571 container start 0867d5176dde679eedb1159c9fe9b63acffcda07d2fe1b063e26d75a6a0e4fdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 20:40:32 compute-1 bash[77596]: 0867d5176dde679eedb1159c9fe9b63acffcda07d2fe1b063e26d75a6a0e4fdc
Nov 23 20:40:32 compute-1 systemd[1]: Started Ceph osd.0 for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 20:40:32 compute-1 ceph-osd[77613]: set uid:gid to 167:167 (ceph:ceph)
Nov 23 20:40:32 compute-1 ceph-osd[77613]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Nov 23 20:40:32 compute-1 ceph-osd[77613]: pidfile_write: ignore empty --pid-file
Nov 23 20:40:32 compute-1 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 23 20:40:32 compute-1 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 23 20:40:32 compute-1 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 20:40:32 compute-1 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 20:40:32 compute-1 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) close
Nov 23 20:40:32 compute-1 sudo[77038]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:32 compute-1 sudo[77625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:40:32 compute-1 sudo[77625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:32 compute-1 sudo[77625]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:32 compute-1 sudo[77650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -- raw list --format json
Nov 23 20:40:32 compute-1 sudo[77650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:32 compute-1 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 23 20:40:32 compute-1 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 23 20:40:32 compute-1 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 20:40:32 compute-1 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 20:40:32 compute-1 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) close
Nov 23 20:40:32 compute-1 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 23 20:40:32 compute-1 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 23 20:40:32 compute-1 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 20:40:32 compute-1 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 20:40:32 compute-1 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) close
Nov 23 20:40:32 compute-1 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 23 20:40:32 compute-1 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 23 20:40:32 compute-1 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 20:40:32 compute-1 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 20:40:32 compute-1 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) close
Nov 23 20:40:32 compute-1 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 23 20:40:32 compute-1 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 23 20:40:32 compute-1 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 20:40:32 compute-1 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 20:40:32 compute-1 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) close
Nov 23 20:40:33 compute-1 podman[77727]: 2025-11-23 20:40:33.207810627 +0000 UTC m=+0.049855575 container create d0dbd55ce580a48eabdfa516c2e0422cc9f1e2504766341b21245d21fa057517 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 20:40:33 compute-1 systemd[1]: Started libpod-conmon-d0dbd55ce580a48eabdfa516c2e0422cc9f1e2504766341b21245d21fa057517.scope.
Nov 23 20:40:33 compute-1 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 23 20:40:33 compute-1 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 23 20:40:33 compute-1 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 20:40:33 compute-1 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 20:40:33 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 23 20:40:33 compute-1 ceph-osd[77613]: bdev(0x558058fe1c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 23 20:40:33 compute-1 ceph-osd[77613]: bdev(0x558058fe1c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 23 20:40:33 compute-1 ceph-osd[77613]: bdev(0x558058fe1c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 20:40:33 compute-1 ceph-osd[77613]: bdev(0x558058fe1c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 20:40:33 compute-1 ceph-osd[77613]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 23 20:40:33 compute-1 ceph-osd[77613]: bdev(0x558058fe1c00 /var/lib/ceph/osd/ceph-0/block) close
Nov 23 20:40:33 compute-1 systemd[1]: Started libcrun container.
Nov 23 20:40:33 compute-1 podman[77727]: 2025-11-23 20:40:33.18671289 +0000 UTC m=+0.028757868 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 20:40:33 compute-1 podman[77727]: 2025-11-23 20:40:33.290689451 +0000 UTC m=+0.132734429 container init d0dbd55ce580a48eabdfa516c2e0422cc9f1e2504766341b21245d21fa057517 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_diffie, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 20:40:33 compute-1 podman[77727]: 2025-11-23 20:40:33.296938431 +0000 UTC m=+0.138983369 container start d0dbd55ce580a48eabdfa516c2e0422cc9f1e2504766341b21245d21fa057517 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1)
Nov 23 20:40:33 compute-1 elated_diffie[77744]: 167 167
Nov 23 20:40:33 compute-1 systemd[1]: libpod-d0dbd55ce580a48eabdfa516c2e0422cc9f1e2504766341b21245d21fa057517.scope: Deactivated successfully.
Nov 23 20:40:33 compute-1 podman[77727]: 2025-11-23 20:40:33.345165037 +0000 UTC m=+0.187209985 container attach d0dbd55ce580a48eabdfa516c2e0422cc9f1e2504766341b21245d21fa057517 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 20:40:33 compute-1 podman[77727]: 2025-11-23 20:40:33.346679031 +0000 UTC m=+0.188723999 container died d0dbd55ce580a48eabdfa516c2e0422cc9f1e2504766341b21245d21fa057517 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 23 20:40:33 compute-1 systemd[1]: var-lib-containers-storage-overlay-443077b0fa3c2450d97ed42fa89cf2cc681443c83056b5526a19a0d6a3778b37-merged.mount: Deactivated successfully.
Nov 23 20:40:33 compute-1 podman[77727]: 2025-11-23 20:40:33.491414082 +0000 UTC m=+0.333459030 container remove d0dbd55ce580a48eabdfa516c2e0422cc9f1e2504766341b21245d21fa057517 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_diffie, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 20:40:33 compute-1 systemd[1]: libpod-conmon-d0dbd55ce580a48eabdfa516c2e0422cc9f1e2504766341b21245d21fa057517.scope: Deactivated successfully.
Nov 23 20:40:33 compute-1 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) close
Nov 23 20:40:33 compute-1 podman[77772]: 2025-11-23 20:40:33.635182128 +0000 UTC m=+0.037758137 container create 0460ab23e3c04091a55df3aae3bff6d20ceec6a058103951e824f32ee63c5e45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 23 20:40:33 compute-1 systemd[1]: Started libpod-conmon-0460ab23e3c04091a55df3aae3bff6d20ceec6a058103951e824f32ee63c5e45.scope.
Nov 23 20:40:33 compute-1 systemd[1]: Started libcrun container.
Nov 23 20:40:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03db5adf52abf5a5af50c6a1537fea671d64b17e49c505d3dce7e4fa318500bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 20:40:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03db5adf52abf5a5af50c6a1537fea671d64b17e49c505d3dce7e4fa318500bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 20:40:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03db5adf52abf5a5af50c6a1537fea671d64b17e49c505d3dce7e4fa318500bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 20:40:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03db5adf52abf5a5af50c6a1537fea671d64b17e49c505d3dce7e4fa318500bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 20:40:33 compute-1 podman[77772]: 2025-11-23 20:40:33.705956923 +0000 UTC m=+0.108532962 container init 0460ab23e3c04091a55df3aae3bff6d20ceec6a058103951e824f32ee63c5e45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 20:40:33 compute-1 podman[77772]: 2025-11-23 20:40:33.617579102 +0000 UTC m=+0.020155131 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 20:40:33 compute-1 podman[77772]: 2025-11-23 20:40:33.713641074 +0000 UTC m=+0.116217083 container start 0460ab23e3c04091a55df3aae3bff6d20ceec6a058103951e824f32ee63c5e45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_rhodes, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325)
Nov 23 20:40:33 compute-1 podman[77772]: 2025-11-23 20:40:33.71768985 +0000 UTC m=+0.120265869 container attach 0460ab23e3c04091a55df3aae3bff6d20ceec6a058103951e824f32ee63c5e45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 20:40:33 compute-1 ceph-osd[77613]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Nov 23 20:40:33 compute-1 ceph-osd[77613]: load: jerasure load: lrc 
Nov 23 20:40:33 compute-1 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 23 20:40:33 compute-1 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 23 20:40:33 compute-1 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 20:40:33 compute-1 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 20:40:33 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 23 20:40:33 compute-1 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) close
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) close
Nov 23 20:40:34 compute-1 lvm[77870]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 20:40:34 compute-1 lvm[77870]: VG ceph_vg0 finished
Nov 23 20:40:34 compute-1 ceph-osd[77613]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 23 20:40:34 compute-1 ceph-osd[77613]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) close
Nov 23 20:40:34 compute-1 eager_rhodes[77788]: {}
Nov 23 20:40:34 compute-1 systemd[1]: libpod-0460ab23e3c04091a55df3aae3bff6d20ceec6a058103951e824f32ee63c5e45.scope: Deactivated successfully.
Nov 23 20:40:34 compute-1 podman[77772]: 2025-11-23 20:40:34.402886775 +0000 UTC m=+0.805462784 container died 0460ab23e3c04091a55df3aae3bff6d20ceec6a058103951e824f32ee63c5e45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 20:40:34 compute-1 systemd[1]: libpod-0460ab23e3c04091a55df3aae3bff6d20ceec6a058103951e824f32ee63c5e45.scope: Consumed 1.020s CPU time.
Nov 23 20:40:34 compute-1 systemd[1]: var-lib-containers-storage-overlay-03db5adf52abf5a5af50c6a1537fea671d64b17e49c505d3dce7e4fa318500bf-merged.mount: Deactivated successfully.
Nov 23 20:40:34 compute-1 podman[77772]: 2025-11-23 20:40:34.449233569 +0000 UTC m=+0.851809578 container remove 0460ab23e3c04091a55df3aae3bff6d20ceec6a058103951e824f32ee63c5e45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_rhodes, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 23 20:40:34 compute-1 systemd[1]: libpod-conmon-0460ab23e3c04091a55df3aae3bff6d20ceec6a058103951e824f32ee63c5e45.scope: Deactivated successfully.
Nov 23 20:40:34 compute-1 sudo[77650]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) close
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) close
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bdev(0x558059e7d000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bdev(0x558059e7d000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bdev(0x558059e7d000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bdev(0x558059e7d000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bluefs mount
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bluefs mount shared_bdev_used = 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: RocksDB version: 7.9.2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Git sha 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Compile date 2025-07-17 03:12:14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: DB SUMMARY
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: DB Session ID:  T5EFLYR04KJJ2CJAS6UC
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: CURRENT file:  CURRENT
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: IDENTITY file:  IDENTITY
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                         Options.error_if_exists: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                       Options.create_if_missing: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                         Options.paranoid_checks: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                                     Options.env: 0x558059e4ddc0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                                Options.info_log: 0x558059e517a0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.max_file_opening_threads: 16
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                              Options.statistics: (nil)
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                               Options.use_fsync: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                       Options.max_log_file_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                         Options.allow_fallocate: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.use_direct_reads: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.create_missing_column_families: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                              Options.db_log_dir: 
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                                 Options.wal_dir: db.wal
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.advise_random_on_open: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.write_buffer_manager: 0x558059f46a00
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                            Options.rate_limiter: (nil)
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.unordered_write: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                               Options.row_cache: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                              Options.wal_filter: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.allow_ingest_behind: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.two_write_queues: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.manual_wal_flush: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.wal_compression: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.atomic_flush: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                 Options.log_readahead_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.allow_data_in_errors: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.db_host_id: __hostname__
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.max_background_jobs: 4
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.max_background_compactions: -1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.max_subcompactions: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.max_open_files: -1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.bytes_per_sync: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.max_background_flushes: -1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Compression algorithms supported:
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         kZSTD supported: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         kXpressCompression supported: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         kBZip2Compression supported: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         kLZ4Compression supported: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         kZlibCompression supported: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         kLZ4HCCompression supported: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         kSnappyCompression supported: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558059077350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:           Options.merge_operator: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558059077350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:           Options.merge_operator: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558059077350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:           Options.merge_operator: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558059077350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:           Options.merge_operator: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558059077350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:           Options.merge_operator: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558059077350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:           Options.merge_operator: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558059077350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:           Options.merge_operator: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51b80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5580590769b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:           Options.merge_operator: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51b80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5580590769b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:           Options.merge_operator: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51b80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5580590769b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: adc772d2-7d85-4926-b23a-f9f15aa731bb
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930434707770, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930434707968, "job": 1, "event": "recovery_finished"}
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: freelist init
Nov 23 20:40:34 compute-1 ceph-osd[77613]: freelist _read_cfg
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bluefs umount
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bdev(0x558059e7d000 /var/lib/ceph/osd/ceph-0/block) close
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bdev(0x558059e7d000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bdev(0x558059e7d000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bdev(0x558059e7d000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bdev(0x558059e7d000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bluefs mount
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bluefs mount shared_bdev_used = 4718592
Nov 23 20:40:34 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: RocksDB version: 7.9.2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Git sha 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Compile date 2025-07-17 03:12:14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: DB SUMMARY
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: DB Session ID:  T5EFLYR04KJJ2CJAS6UD
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: CURRENT file:  CURRENT
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: IDENTITY file:  IDENTITY
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                         Options.error_if_exists: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                       Options.create_if_missing: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                         Options.paranoid_checks: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                                     Options.env: 0x558059fea310
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                                Options.info_log: 0x558059e51920
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.max_file_opening_threads: 16
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                              Options.statistics: (nil)
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                               Options.use_fsync: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                       Options.max_log_file_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                         Options.allow_fallocate: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.use_direct_reads: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.create_missing_column_families: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                              Options.db_log_dir: 
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                                 Options.wal_dir: db.wal
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.advise_random_on_open: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.write_buffer_manager: 0x558059f46a00
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                            Options.rate_limiter: (nil)
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.unordered_write: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                               Options.row_cache: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                              Options.wal_filter: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.allow_ingest_behind: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.two_write_queues: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.manual_wal_flush: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.wal_compression: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.atomic_flush: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                 Options.log_readahead_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.allow_data_in_errors: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.db_host_id: __hostname__
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.max_background_jobs: 4
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.max_background_compactions: -1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.max_subcompactions: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.max_open_files: -1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.bytes_per_sync: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.max_background_flushes: -1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Compression algorithms supported:
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         kZSTD supported: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         kXpressCompression supported: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         kBZip2Compression supported: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         kLZ4Compression supported: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         kZlibCompression supported: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         kLZ4HCCompression supported: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         kSnappyCompression supported: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558059077350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:           Options.merge_operator: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558059077350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:           Options.merge_operator: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558059077350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:           Options.merge_operator: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558059077350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:           Options.merge_operator: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558059077350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:           Options.merge_operator: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558059077350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:           Options.merge_operator: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558059077350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:           Options.merge_operator: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51ac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5580590769b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:           Options.merge_operator: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51ac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5580590769b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:           Options.merge_operator: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51ac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5580590769b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: adc772d2-7d85-4926-b23a-f9f15aa731bb
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930434972448, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930434979718, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930434, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "adc772d2-7d85-4926-b23a-f9f15aa731bb", "db_session_id": "T5EFLYR04KJJ2CJAS6UD", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930434983654, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930434, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "adc772d2-7d85-4926-b23a-f9f15aa731bb", "db_session_id": "T5EFLYR04KJJ2CJAS6UD", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930434986577, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930434, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "adc772d2-7d85-4926-b23a-f9f15aa731bb", "db_session_id": "T5EFLYR04KJJ2CJAS6UD", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930434988962, "job": 1, "event": "recovery_finished"}
Nov 23 20:40:34 compute-1 ceph-osd[77613]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 23 20:40:35 compute-1 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55805a04e000
Nov 23 20:40:35 compute-1 ceph-osd[77613]: rocksdb: DB pointer 0x558059ff8000
Nov 23 20:40:35 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 23 20:40:35 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Nov 23 20:40:35 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Nov 23 20:40:35 compute-1 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 20:40:35 compute-1 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5580590769b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5580590769b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5580590769b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 23 20:40:35 compute-1 ceph-osd[77613]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 23 20:40:35 compute-1 ceph-osd[77613]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 23 20:40:35 compute-1 ceph-osd[77613]: _get_class not permitted to load lua
Nov 23 20:40:35 compute-1 ceph-osd[77613]: _get_class not permitted to load sdk
Nov 23 20:40:35 compute-1 ceph-osd[77613]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 23 20:40:35 compute-1 ceph-osd[77613]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 23 20:40:35 compute-1 ceph-osd[77613]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 23 20:40:35 compute-1 ceph-osd[77613]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 23 20:40:35 compute-1 ceph-osd[77613]: osd.0 0 load_pgs
Nov 23 20:40:35 compute-1 ceph-osd[77613]: osd.0 0 load_pgs opened 0 pgs
Nov 23 20:40:35 compute-1 ceph-osd[77613]: osd.0 0 log_to_monitors true
Nov 23 20:40:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0[77609]: 2025-11-23T20:40:35.019+0000 7fc2071c1740 -1 osd.0 0 log_to_monitors true
Nov 23 20:40:35 compute-1 sudo[78300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 20:40:35 compute-1 sudo[78300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:35 compute-1 sudo[78300]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:35 compute-1 sudo[78325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:40:35 compute-1 sudo[78325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:35 compute-1 sudo[78325]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:35 compute-1 sudo[78350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 23 20:40:35 compute-1 sudo[78350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:36 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 23 20:40:36 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 23 20:40:36 compute-1 podman[78448]: 2025-11-23 20:40:36.764449598 +0000 UTC m=+0.966926487 container exec e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 23 20:40:36 compute-1 podman[78448]: 2025-11-23 20:40:36.865471723 +0000 UTC m=+1.067948612 container exec_died e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 23 20:40:37 compute-1 ceph-osd[77613]: osd.0 0 done with init, starting boot process
Nov 23 20:40:37 compute-1 ceph-osd[77613]: osd.0 0 start_boot
Nov 23 20:40:37 compute-1 ceph-osd[77613]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 23 20:40:37 compute-1 ceph-osd[77613]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 23 20:40:37 compute-1 ceph-osd[77613]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 23 20:40:37 compute-1 ceph-osd[77613]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 23 20:40:37 compute-1 ceph-osd[77613]: osd.0 0  bench count 12288000 bsize 4 KiB
Nov 23 20:40:37 compute-1 sshd-session[78377]: Invalid user user from 196.191.142.67 port 41530
Nov 23 20:40:37 compute-1 sshd-session[78377]: Connection closed by invalid user user 196.191.142.67 port 41530 [preauth]
Nov 23 20:40:38 compute-1 sudo[78350]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:38 compute-1 sudo[78504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:40:38 compute-1 sudo[78504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:38 compute-1 sudo[78504]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:38 compute-1 sudo[78529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -- inventory --format=json-pretty --filter-for-batch
Nov 23 20:40:38 compute-1 sudo[78529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:40:39 compute-1 podman[78592]: 2025-11-23 20:40:39.360040143 +0000 UTC m=+0.098534965 container create 4520fe93de4cdb1b889e4a4f68af2e68cd2926571cbba134cc770be56f5c9810 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_pascal, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid)
Nov 23 20:40:39 compute-1 podman[78592]: 2025-11-23 20:40:39.282187153 +0000 UTC m=+0.020681995 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 20:40:39 compute-1 systemd[1]: Started libpod-conmon-4520fe93de4cdb1b889e4a4f68af2e68cd2926571cbba134cc770be56f5c9810.scope.
Nov 23 20:40:39 compute-1 systemd[1]: Started libcrun container.
Nov 23 20:40:39 compute-1 podman[78592]: 2025-11-23 20:40:39.563020309 +0000 UTC m=+0.301515151 container init 4520fe93de4cdb1b889e4a4f68af2e68cd2926571cbba134cc770be56f5c9810 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_pascal, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 23 20:40:39 compute-1 podman[78592]: 2025-11-23 20:40:39.568755104 +0000 UTC m=+0.307249926 container start 4520fe93de4cdb1b889e4a4f68af2e68cd2926571cbba134cc770be56f5c9810 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 23 20:40:39 compute-1 zen_pascal[78608]: 167 167
Nov 23 20:40:39 compute-1 systemd[1]: libpod-4520fe93de4cdb1b889e4a4f68af2e68cd2926571cbba134cc770be56f5c9810.scope: Deactivated successfully.
Nov 23 20:40:39 compute-1 podman[78592]: 2025-11-23 20:40:39.62358401 +0000 UTC m=+0.362078882 container attach 4520fe93de4cdb1b889e4a4f68af2e68cd2926571cbba134cc770be56f5c9810 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_pascal, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 20:40:39 compute-1 podman[78592]: 2025-11-23 20:40:39.624574169 +0000 UTC m=+0.363068991 container died 4520fe93de4cdb1b889e4a4f68af2e68cd2926571cbba134cc770be56f5c9810 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 20:40:39 compute-1 systemd[1]: var-lib-containers-storage-overlay-fad6957373f9b8d0a72e9360c8724a7ef38145da62dd8b4364e3cbe5256ba5f7-merged.mount: Deactivated successfully.
Nov 23 20:40:40 compute-1 podman[78592]: 2025-11-23 20:40:40.066107207 +0000 UTC m=+0.804602069 container remove 4520fe93de4cdb1b889e4a4f68af2e68cd2926571cbba134cc770be56f5c9810 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 23 20:40:40 compute-1 systemd[1]: libpod-conmon-4520fe93de4cdb1b889e4a4f68af2e68cd2926571cbba134cc770be56f5c9810.scope: Deactivated successfully.
Nov 23 20:40:40 compute-1 podman[78633]: 2025-11-23 20:40:40.245599188 +0000 UTC m=+0.078177268 container create a27fc3ed11e40ce239ee9355521c0485a4a7ae3df95079af43a91660699317af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_elgamal, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 23 20:40:40 compute-1 podman[78633]: 2025-11-23 20:40:40.189098483 +0000 UTC m=+0.021676583 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 20:40:40 compute-1 systemd[1]: Started libpod-conmon-a27fc3ed11e40ce239ee9355521c0485a4a7ae3df95079af43a91660699317af.scope.
Nov 23 20:40:40 compute-1 systemd[1]: Started libcrun container.
Nov 23 20:40:40 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6344a23d2411243019c62c5de6e9256e307f5f6a3061acd1155508beeab150fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 20:40:40 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6344a23d2411243019c62c5de6e9256e307f5f6a3061acd1155508beeab150fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 20:40:40 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6344a23d2411243019c62c5de6e9256e307f5f6a3061acd1155508beeab150fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 20:40:40 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6344a23d2411243019c62c5de6e9256e307f5f6a3061acd1155508beeab150fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 20:40:40 compute-1 podman[78633]: 2025-11-23 20:40:40.449198114 +0000 UTC m=+0.281776214 container init a27fc3ed11e40ce239ee9355521c0485a4a7ae3df95079af43a91660699317af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_elgamal, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 20:40:40 compute-1 podman[78633]: 2025-11-23 20:40:40.457566814 +0000 UTC m=+0.290144894 container start a27fc3ed11e40ce239ee9355521c0485a4a7ae3df95079af43a91660699317af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_elgamal, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 23 20:40:40 compute-1 podman[78633]: 2025-11-23 20:40:40.525953141 +0000 UTC m=+0.358531221 container attach a27fc3ed11e40ce239ee9355521c0485a4a7ae3df95079af43a91660699317af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_elgamal, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]: [
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:     {
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:         "available": false,
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:         "being_replaced": false,
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:         "ceph_device_lvm": false,
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:         "lsm_data": {},
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:         "lvs": [],
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:         "path": "/dev/sr0",
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:         "rejected_reasons": [
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:             "Has a FileSystem",
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:             "Insufficient space (<5GB)"
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:         ],
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:         "sys_api": {
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:             "actuators": null,
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:             "device_nodes": [
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:                 "sr0"
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:             ],
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:             "devname": "sr0",
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:             "human_readable_size": "482.00 KB",
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:             "id_bus": "ata",
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:             "model": "QEMU DVD-ROM",
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:             "nr_requests": "2",
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:             "parent": "/dev/sr0",
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:             "partitions": {},
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:             "path": "/dev/sr0",
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:             "removable": "1",
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:             "rev": "2.5+",
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:             "ro": "0",
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:             "rotational": "1",
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:             "sas_address": "",
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:             "sas_device_handle": "",
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:             "scheduler_mode": "mq-deadline",
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:             "sectors": 0,
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:             "sectorsize": "2048",
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:             "size": 493568.0,
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:             "support_discard": "2048",
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:             "type": "disk",
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:             "vendor": "QEMU"
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:         }
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]:     }
Nov 23 20:40:41 compute-1 mystifying_elgamal[78649]: ]
Nov 23 20:40:41 compute-1 systemd[1]: libpod-a27fc3ed11e40ce239ee9355521c0485a4a7ae3df95079af43a91660699317af.scope: Deactivated successfully.
Nov 23 20:40:41 compute-1 podman[78633]: 2025-11-23 20:40:41.155586967 +0000 UTC m=+0.988165077 container died a27fc3ed11e40ce239ee9355521c0485a4a7ae3df95079af43a91660699317af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_elgamal, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid)
Nov 23 20:40:41 compute-1 systemd[1]: var-lib-containers-storage-overlay-6344a23d2411243019c62c5de6e9256e307f5f6a3061acd1155508beeab150fa-merged.mount: Deactivated successfully.
Nov 23 20:40:41 compute-1 podman[78633]: 2025-11-23 20:40:41.730232673 +0000 UTC m=+1.562810753 container remove a27fc3ed11e40ce239ee9355521c0485a4a7ae3df95079af43a91660699317af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_elgamal, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 20:40:41 compute-1 systemd[1]: libpod-conmon-a27fc3ed11e40ce239ee9355521c0485a4a7ae3df95079af43a91660699317af.scope: Deactivated successfully.
Nov 23 20:40:41 compute-1 sudo[78529]: pam_unix(sudo:session): session closed for user root
Nov 23 20:40:43 compute-1 sshd-session[79801]: Invalid user local from 102.176.81.29 port 40358
Nov 23 20:40:44 compute-1 sshd-session[79801]: Received disconnect from 102.176.81.29 port 40358:11: Bye Bye [preauth]
Nov 23 20:40:44 compute-1 sshd-session[79801]: Disconnected from invalid user local 102.176.81.29 port 40358 [preauth]
Nov 23 20:40:47 compute-1 sshd-session[79803]: Invalid user teamspeak from 43.225.142.116 port 51816
Nov 23 20:40:47 compute-1 sshd-session[79803]: Received disconnect from 43.225.142.116 port 51816:11: Bye Bye [preauth]
Nov 23 20:40:47 compute-1 sshd-session[79803]: Disconnected from invalid user teamspeak 43.225.142.116 port 51816 [preauth]
Nov 23 20:40:48 compute-1 ceph-osd[77613]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 24.492 iops: 6269.861 elapsed_sec: 0.478
Nov 23 20:40:48 compute-1 ceph-osd[77613]: log_channel(cluster) log [WRN] : OSD bench result of 6269.861471 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 23 20:40:48 compute-1 ceph-osd[77613]: osd.0 0 waiting for initial osdmap
Nov 23 20:40:48 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0[77609]: 2025-11-23T20:40:48.206+0000 7fc203957640 -1 osd.0 0 waiting for initial osdmap
Nov 23 20:40:48 compute-1 ceph-osd[77613]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Nov 23 20:40:48 compute-1 ceph-osd[77613]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Nov 23 20:40:48 compute-1 ceph-osd[77613]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Nov 23 20:40:48 compute-1 ceph-osd[77613]: osd.0 8 check_osdmap_features require_osd_release unknown -> squid
Nov 23 20:40:48 compute-1 ceph-osd[77613]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 23 20:40:48 compute-1 ceph-osd[77613]: osd.0 8 set_numa_affinity not setting numa affinity
Nov 23 20:40:48 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0[77609]: 2025-11-23T20:40:48.228+0000 7fc1fe76c640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 23 20:40:48 compute-1 ceph-osd[77613]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Nov 23 20:40:48 compute-1 ceph-osd[77613]: osd.0 9 state: booting -> active
Nov 23 20:40:48 compute-1 ceph-osd[77613]: osd.0 9 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 23 20:40:48 compute-1 ceph-osd[77613]: osd.0 9 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Nov 23 20:40:48 compute-1 ceph-osd[77613]: osd.0 9 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 23 20:40:48 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 9 pg[1.0( empty local-lis/les=0/0 n=0 ec=9/9 lis/c=0/0 les/c/f=0/0/0 sis=9) [0] r=0 lpr=9 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:40:49 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 10 pg[1.0( empty local-lis/les=9/10 n=0 ec=9/9 lis/c=0/0 les/c/f=0/0/0 sis=9) [0] r=0 lpr=9 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:40:49 compute-1 sshd-session[79805]: Received disconnect from 118.145.189.160 port 37848:11: Bye Bye [preauth]
Nov 23 20:40:49 compute-1 sshd-session[79805]: Disconnected from authenticating user root 118.145.189.160 port 37848 [preauth]
Nov 23 20:40:55 compute-1 sshd-session[79807]: Invalid user solv from 161.35.179.103 port 57044
Nov 23 20:40:55 compute-1 sshd-session[79807]: Connection closed by invalid user solv 161.35.179.103 port 57044 [preauth]
Nov 23 20:40:59 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 12 pg[2.0( empty local-lis/les=0/0 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=12) [0] r=0 lpr=12 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:40:59 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 13 pg[2.0( empty local-lis/les=12/13 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=12) [0] r=0 lpr=12 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:00 compute-1 sshd-session[79809]: Invalid user ethereumdocker from 92.118.39.92 port 35422
Nov 23 20:41:00 compute-1 sshd-session[79809]: Connection closed by invalid user ethereumdocker 92.118.39.92 port 35422 [preauth]
Nov 23 20:41:04 compute-1 sudo[79811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:41:04 compute-1 sudo[79811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:04 compute-1 sudo[79811]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:04 compute-1 sudo[79836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:41:04 compute-1 sudo[79836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:04 compute-1 podman[79902]: 2025-11-23 20:41:04.829790116 +0000 UTC m=+0.043968826 container create bc05ce2cec43290cfea9d42723d0fb4721edd46f5b0965af5795c9957fc251a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 23 20:41:04 compute-1 systemd[1]: Started libpod-conmon-bc05ce2cec43290cfea9d42723d0fb4721edd46f5b0965af5795c9957fc251a9.scope.
Nov 23 20:41:04 compute-1 systemd[1]: Started libcrun container.
Nov 23 20:41:04 compute-1 podman[79902]: 2025-11-23 20:41:04.901821057 +0000 UTC m=+0.115999797 container init bc05ce2cec43290cfea9d42723d0fb4721edd46f5b0965af5795c9957fc251a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_ellis, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 20:41:04 compute-1 podman[79902]: 2025-11-23 20:41:04.810587544 +0000 UTC m=+0.024766274 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 20:41:04 compute-1 podman[79902]: 2025-11-23 20:41:04.907642074 +0000 UTC m=+0.121820784 container start bc05ce2cec43290cfea9d42723d0fb4721edd46f5b0965af5795c9957fc251a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_ellis, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 23 20:41:04 compute-1 podman[79902]: 2025-11-23 20:41:04.911194447 +0000 UTC m=+0.125373157 container attach bc05ce2cec43290cfea9d42723d0fb4721edd46f5b0965af5795c9957fc251a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_ellis, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 23 20:41:04 compute-1 affectionate_ellis[79918]: 167 167
Nov 23 20:41:04 compute-1 systemd[1]: libpod-bc05ce2cec43290cfea9d42723d0fb4721edd46f5b0965af5795c9957fc251a9.scope: Deactivated successfully.
Nov 23 20:41:04 compute-1 podman[79902]: 2025-11-23 20:41:04.911990379 +0000 UTC m=+0.126169089 container died bc05ce2cec43290cfea9d42723d0fb4721edd46f5b0965af5795c9957fc251a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Nov 23 20:41:04 compute-1 systemd[1]: var-lib-containers-storage-overlay-922a57aeb57721a2f458f3450d38cc731f16953c203e8eda355a2cab9b58c1c8-merged.mount: Deactivated successfully.
Nov 23 20:41:04 compute-1 podman[79902]: 2025-11-23 20:41:04.982640642 +0000 UTC m=+0.196819362 container remove bc05ce2cec43290cfea9d42723d0fb4721edd46f5b0965af5795c9957fc251a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_ellis, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 23 20:41:04 compute-1 systemd[1]: libpod-conmon-bc05ce2cec43290cfea9d42723d0fb4721edd46f5b0965af5795c9957fc251a9.scope: Deactivated successfully.
Nov 23 20:41:05 compute-1 podman[79936]: 2025-11-23 20:41:05.031294931 +0000 UTC m=+0.031114797 container create 0d339a0e07800eb18856a0e18c57aa063d4121a570e4eb823c0e2702d57f354d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 20:41:05 compute-1 systemd[1]: Started libpod-conmon-0d339a0e07800eb18856a0e18c57aa063d4121a570e4eb823c0e2702d57f354d.scope.
Nov 23 20:41:05 compute-1 systemd[1]: Started libcrun container.
Nov 23 20:41:05 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b62b1e6eeaac7927f1a32cf21ca4fe22b949db430f8c53cf62b9bc177295e04/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 20:41:05 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b62b1e6eeaac7927f1a32cf21ca4fe22b949db430f8c53cf62b9bc177295e04/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Nov 23 20:41:05 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b62b1e6eeaac7927f1a32cf21ca4fe22b949db430f8c53cf62b9bc177295e04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 20:41:05 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b62b1e6eeaac7927f1a32cf21ca4fe22b949db430f8c53cf62b9bc177295e04/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Nov 23 20:41:05 compute-1 podman[79936]: 2025-11-23 20:41:05.084892592 +0000 UTC m=+0.084712468 container init 0d339a0e07800eb18856a0e18c57aa063d4121a570e4eb823c0e2702d57f354d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_tesla, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 23 20:41:05 compute-1 podman[79936]: 2025-11-23 20:41:05.092298605 +0000 UTC m=+0.092118471 container start 0d339a0e07800eb18856a0e18c57aa063d4121a570e4eb823c0e2702d57f354d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_tesla, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 20:41:05 compute-1 podman[79936]: 2025-11-23 20:41:05.095367853 +0000 UTC m=+0.095187719 container attach 0d339a0e07800eb18856a0e18c57aa063d4121a570e4eb823c0e2702d57f354d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 20:41:05 compute-1 podman[79936]: 2025-11-23 20:41:05.017153874 +0000 UTC m=+0.016973760 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 20:41:05 compute-1 systemd[1]: libpod-0d339a0e07800eb18856a0e18c57aa063d4121a570e4eb823c0e2702d57f354d.scope: Deactivated successfully.
Nov 23 20:41:05 compute-1 podman[79936]: 2025-11-23 20:41:05.174222651 +0000 UTC m=+0.174042557 container died 0d339a0e07800eb18856a0e18c57aa063d4121a570e4eb823c0e2702d57f354d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 20:41:05 compute-1 systemd[1]: var-lib-containers-storage-overlay-9b62b1e6eeaac7927f1a32cf21ca4fe22b949db430f8c53cf62b9bc177295e04-merged.mount: Deactivated successfully.
Nov 23 20:41:05 compute-1 podman[79936]: 2025-11-23 20:41:05.216928679 +0000 UTC m=+0.216748545 container remove 0d339a0e07800eb18856a0e18c57aa063d4121a570e4eb823c0e2702d57f354d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 20:41:05 compute-1 systemd[1]: libpod-conmon-0d339a0e07800eb18856a0e18c57aa063d4121a570e4eb823c0e2702d57f354d.scope: Deactivated successfully.
Nov 23 20:41:05 compute-1 systemd[1]: Reloading.
Nov 23 20:41:05 compute-1 systemd-rc-local-generator[80015]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:41:05 compute-1 systemd-sysv-generator[80019]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:41:05 compute-1 systemd[1]: Reloading.
Nov 23 20:41:05 compute-1 systemd-sysv-generator[80063]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:41:05 compute-1 systemd-rc-local-generator[80059]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:41:05 compute-1 systemd[1]: Starting Ceph mon.compute-1 for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 20:41:05 compute-1 podman[80116]: 2025-11-23 20:41:05.960734889 +0000 UTC m=+0.037776228 container create ec83ddfeced6ca540ac5dcb02096fff1389ee87d706a6fb5f966f976b514b52e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 20:41:06 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28c1f5686b6423e7ba841e6680e952139dba21ea00e5fbe7f67fcc68c35cb861/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 20:41:06 compute-1 podman[80116]: 2025-11-23 20:41:05.942563026 +0000 UTC m=+0.019604395 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 20:41:06 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28c1f5686b6423e7ba841e6680e952139dba21ea00e5fbe7f67fcc68c35cb861/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 20:41:06 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28c1f5686b6423e7ba841e6680e952139dba21ea00e5fbe7f67fcc68c35cb861/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 20:41:06 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28c1f5686b6423e7ba841e6680e952139dba21ea00e5fbe7f67fcc68c35cb861/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Nov 23 20:41:06 compute-1 podman[80116]: 2025-11-23 20:41:06.064302177 +0000 UTC m=+0.141343546 container init ec83ddfeced6ca540ac5dcb02096fff1389ee87d706a6fb5f966f976b514b52e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True)
Nov 23 20:41:06 compute-1 podman[80116]: 2025-11-23 20:41:06.075829139 +0000 UTC m=+0.152870488 container start ec83ddfeced6ca540ac5dcb02096fff1389ee87d706a6fb5f966f976b514b52e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 20:41:06 compute-1 bash[80116]: ec83ddfeced6ca540ac5dcb02096fff1389ee87d706a6fb5f966f976b514b52e
Nov 23 20:41:06 compute-1 systemd[1]: Started Ceph mon.compute-1 for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 20:41:06 compute-1 sudo[79836]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:06 compute-1 ceph-mon[80135]: set uid:gid to 167:167 (ceph:ceph)
Nov 23 20:41:06 compute-1 ceph-mon[80135]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Nov 23 20:41:06 compute-1 ceph-mon[80135]: pidfile_write: ignore empty --pid-file
Nov 23 20:41:06 compute-1 ceph-mon[80135]: load: jerasure load: lrc 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: RocksDB version: 7.9.2
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: Git sha 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: Compile date 2025-07-17 03:12:14
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: DB SUMMARY
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: DB Session ID:  RYN2LDD9QR94TIN0USPF
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: CURRENT file:  CURRENT
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: IDENTITY file:  IDENTITY
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-1/store.db dir, Total Num: 0, files: 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-1/store.db: 000004.log size: 511 ; 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                         Options.error_if_exists: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                       Options.create_if_missing: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                         Options.paranoid_checks: 1
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                                     Options.env: 0x560648be2c20
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                                Options.info_log: 0x560649e32e40
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                Options.max_file_opening_threads: 16
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                              Options.statistics: (nil)
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                               Options.use_fsync: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                       Options.max_log_file_size: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                         Options.allow_fallocate: 1
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                        Options.use_direct_reads: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:          Options.create_missing_column_families: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                              Options.db_log_dir: 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                                 Options.wal_dir: 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                   Options.advise_random_on_open: 1
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                    Options.write_buffer_manager: 0x560649e37900
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                            Options.rate_limiter: (nil)
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                  Options.unordered_write: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                               Options.row_cache: None
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                              Options.wal_filter: None
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:             Options.allow_ingest_behind: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:             Options.two_write_queues: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:             Options.manual_wal_flush: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:             Options.wal_compression: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:             Options.atomic_flush: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                 Options.log_readahead_size: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:             Options.allow_data_in_errors: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:             Options.db_host_id: __hostname__
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:             Options.max_background_jobs: 2
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:             Options.max_background_compactions: -1
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:             Options.max_subcompactions: 1
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:             Options.max_total_wal_size: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                          Options.max_open_files: -1
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                          Options.bytes_per_sync: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:       Options.compaction_readahead_size: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                  Options.max_background_flushes: -1
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: Compression algorithms supported:
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:         kZSTD supported: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:         kXpressCompression supported: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:         kBZip2Compression supported: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:         kLZ4Compression supported: 1
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:         kZlibCompression supported: 1
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:         kLZ4HCCompression supported: 1
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:         kSnappyCompression supported: 1
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:           Options.merge_operator: 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:        Options.compaction_filter: None
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560649e32700)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560649e57350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:        Options.write_buffer_size: 33554432
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:  Options.max_write_buffer_number: 2
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:          Options.compression: NoCompression
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:             Options.num_levels: 7
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                           Options.bloom_locality: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                               Options.ttl: 2592000
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                       Options.enable_blob_files: false
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                           Options.min_blob_size: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930466144960, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930466147052, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930466147258, "job": 1, "event": "recovery_finished"}
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x560649e58e00
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: DB pointer 0x560649f62000
Nov 23 20:41:06 compute-1 ceph-mon[80135]: mon.compute-1 does not exist in monmap, will attempt to join an existing cluster
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 20:41:06 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.09 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.09 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560649e57350#2 capacity: 512.00 MB usage: 0.86 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.64 KB,0.00012219%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 23 20:41:06 compute-1 ceph-mon[80135]: using public_addr v2:192.168.122.101:0/0 -> [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0]
Nov 23 20:41:06 compute-1 ceph-mon[80135]: starting mon.compute-1 rank -1 at public addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] at bind addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-1 fsid 03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:41:06 compute-1 ceph-mon[80135]: mon.compute-1@-1(???) e0 preinit fsid 03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:41:06 compute-1 ceph-mon[80135]: mon.compute-1@-1(synchronizing).mds e1 new map
Nov 23 20:41:06 compute-1 ceph-mon[80135]: mon.compute-1@-1(synchronizing).mds e1 print_map
                                           e1
                                           btime 2025-11-23T20:38:56.367641+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Nov 23 20:41:06 compute-1 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 23 20:41:06 compute-1 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 23 20:41:06 compute-1 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
Nov 23 20:41:06 compute-1 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
Nov 23 20:41:06 compute-1 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
Nov 23 20:41:06 compute-1 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e4 e4: 1 total, 0 up, 1 in
Nov 23 20:41:06 compute-1 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e5 e5: 2 total, 0 up, 2 in
Nov 23 20:41:06 compute-1 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e6 e6: 2 total, 0 up, 2 in
Nov 23 20:41:06 compute-1 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e7 e7: 2 total, 0 up, 2 in
Nov 23 20:41:06 compute-1 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e8 e8: 2 total, 1 up, 2 in
Nov 23 20:41:06 compute-1 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e9 e9: 2 total, 2 up, 2 in
Nov 23 20:41:06 compute-1 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e10 e10: 2 total, 2 up, 2 in
Nov 23 20:41:06 compute-1 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e11 e11: 2 total, 2 up, 2 in
Nov 23 20:41:06 compute-1 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e12 e12: 2 total, 2 up, 2 in
Nov 23 20:41:06 compute-1 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e13 e13: 2 total, 2 up, 2 in
Nov 23 20:41:06 compute-1 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e14 e14: 2 total, 2 up, 2 in
Nov 23 20:41:06 compute-1 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e15 e15: 2 total, 2 up, 2 in
Nov 23 20:41:06 compute-1 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e16 e16: 2 total, 2 up, 2 in
Nov 23 20:41:06 compute-1 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e16 crush map has features 3314933000852226048, adjusting msgr requires
Nov 23 20:41:06 compute-1 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e16 crush map has features 288514051259236352, adjusting msgr requires
Nov 23 20:41:06 compute-1 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e16 crush map has features 288514051259236352, adjusting msgr requires
Nov 23 20:41:06 compute-1 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e16 crush map has features 288514051259236352, adjusting msgr requires
Nov 23 20:41:06 compute-1 ceph-mon[80135]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: Updating compute-1:/etc/ceph/ceph.conf
Nov 23 20:41:06 compute-1 ceph-mon[80135]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 20:41:06 compute-1 ceph-mon[80135]: Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 20:41:06 compute-1 ceph-mon[80135]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 23 20:41:06 compute-1 ceph-mon[80135]: Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Nov 23 20:41:06 compute-1 ceph-mon[80135]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 20:41:06 compute-1 ceph-mon[80135]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Nov 23 20:41:06 compute-1 ceph-mon[80135]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: Deploying daemon crash.compute-1 on compute-1
Nov 23 20:41:06 compute-1 ceph-mon[80135]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Nov 23 20:41:06 compute-1 ceph-mon[80135]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2074746697' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f9775703-f092-47d3-b1e4-23e694631322"}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/459267552' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "71c99843-04fc-447b-a9fd-4e17520a545c"}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2074746697' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f9775703-f092-47d3-b1e4-23e694631322"}]': finished
Nov 23 20:41:06 compute-1 ceph-mon[80135]: osdmap e4: 1 total, 0 up, 1 in
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/459267552' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "71c99843-04fc-447b-a9fd-4e17520a545c"}]': finished
Nov 23 20:41:06 compute-1 ceph-mon[80135]: osdmap e5: 2 total, 0 up, 2 in
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/803347647' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/4184528919' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 23 20:41:06 compute-1 ceph-mon[80135]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 20:41:06 compute-1 ceph-mon[80135]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3749114053' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: Deploying daemon osd.0 on compute-1
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: Deploying daemon osd.1 on compute-0
Nov 23 20:41:06 compute-1 ceph-mon[80135]: pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 20:41:06 compute-1 ceph-mon[80135]: pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='osd.0 [v2:192.168.122.101:6800/220289678,v1:192.168.122.101:6801/220289678]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='osd.1 [v2:192.168.122.100:6802/2449545263,v1:192.168.122.100:6803/2449545263]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='osd.0 [v2:192.168.122.101:6800/220289678,v1:192.168.122.101:6801/220289678]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='osd.1 [v2:192.168.122.100:6802/2449545263,v1:192.168.122.100:6803/2449545263]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 23 20:41:06 compute-1 ceph-mon[80135]: osdmap e6: 2 total, 0 up, 2 in
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='osd.1 [v2:192.168.122.100:6802/2449545263,v1:192.168.122.100:6803/2449545263]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='osd.0 [v2:192.168.122.101:6800/220289678,v1:192.168.122.101:6801/220289678]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: purged_snaps scrub starts
Nov 23 20:41:06 compute-1 ceph-mon[80135]: purged_snaps scrub ok
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='osd.1 [v2:192.168.122.100:6802/2449545263,v1:192.168.122.100:6803/2449545263]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='osd.0 [v2:192.168.122.101:6800/220289678,v1:192.168.122.101:6801/220289678]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Nov 23 20:41:06 compute-1 ceph-mon[80135]: osdmap e7: 2 total, 0 up, 2 in
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: purged_snaps scrub starts
Nov 23 20:41:06 compute-1 ceph-mon[80135]: purged_snaps scrub ok
Nov 23 20:41:06 compute-1 ceph-mon[80135]: pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: Adjusting osd_memory_target on compute-0 to 127.9M
Nov 23 20:41:06 compute-1 ceph-mon[80135]: Unable to set osd_memory_target on compute-0 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: Adjusting osd_memory_target on compute-1 to  5248M
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 20:41:06 compute-1 ceph-mon[80135]: OSD bench result of 2031.118864 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: osd.1 [v2:192.168.122.100:6802/2449545263,v1:192.168.122.100:6803/2449545263] boot
Nov 23 20:41:06 compute-1 ceph-mon[80135]: osdmap e8: 2 total, 1 up, 2 in
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: pgmap v46: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 23 20:41:06 compute-1 ceph-mon[80135]: OSD bench result of 6269.861471 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 23 20:41:06 compute-1 ceph-mon[80135]: osd.0 [v2:192.168.122.101:6800/220289678,v1:192.168.122.101:6801/220289678] boot
Nov 23 20:41:06 compute-1 ceph-mon[80135]: osdmap e9: 2 total, 2 up, 2 in
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 23 20:41:06 compute-1 ceph-mon[80135]: osdmap e10: 2 total, 2 up, 2 in
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: pgmap v49: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 23 20:41:06 compute-1 ceph-mon[80135]: osdmap e11: 2 total, 2 up, 2 in
Nov 23 20:41:06 compute-1 ceph-mon[80135]: mgrmap e9: compute-0.oyehye(active, since 92s)
Nov 23 20:41:06 compute-1 ceph-mon[80135]: pgmap v51: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 23 20:41:06 compute-1 ceph-mon[80135]: pgmap v52: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 23 20:41:06 compute-1 ceph-mon[80135]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1340333746' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: Updating compute-2:/etc/ceph/ceph.conf
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1130454146' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1130454146' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 23 20:41:06 compute-1 ceph-mon[80135]: osdmap e12: 2 total, 2 up, 2 in
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1425917096' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 23 20:41:06 compute-1 ceph-mon[80135]: pgmap v56: 2 pgs: 1 active+clean, 1 unknown; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 23 20:41:06 compute-1 ceph-mon[80135]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1425917096' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 23 20:41:06 compute-1 ceph-mon[80135]: osdmap e13: 2 total, 2 up, 2 in
Nov 23 20:41:06 compute-1 ceph-mon[80135]: Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/4197123902' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/4197123902' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 23 20:41:06 compute-1 ceph-mon[80135]: osdmap e14: 2 total, 2 up, 2 in
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:06 compute-1 ceph-mon[80135]: pgmap v59: 4 pgs: 1 active+clean, 3 unknown; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: Deploying daemon mon.compute-2 on compute-2
Nov 23 20:41:06 compute-1 ceph-mon[80135]: osdmap e15: 2 total, 2 up, 2 in
Nov 23 20:41:06 compute-1 ceph-mon[80135]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Nov 23 20:41:06 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1651014750' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 23 20:41:06 compute-1 ceph-mon[80135]: mon.compute-1@-1(synchronizing).paxosservice(auth 1..7) refresh upgraded, format 0 -> 3
Nov 23 20:41:12 compute-1 ceph-mon[80135]: mon.compute-1@-1(probing) e3  my rank is now 2 (was -1)
Nov 23 20:41:12 compute-1 ceph-mon[80135]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Nov 23 20:41:12 compute-1 ceph-mon[80135]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 23 20:41:12 compute-1 ceph-mon[80135]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 20:41:15 compute-1 ceph-mon[80135]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 20:41:15 compute-1 ceph-mon[80135]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 20:41:15 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 23 20:41:15 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Nov 23 20:41:15 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e17 e17: 2 total, 2 up, 2 in
Nov 23 20:41:15 compute-1 ceph-mon[80135]: Deploying daemon mon.compute-1 on compute-1
Nov 23 20:41:15 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 23 20:41:15 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 23 20:41:15 compute-1 ceph-mon[80135]: mon.compute-0 calling monitor election
Nov 23 20:41:15 compute-1 ceph-mon[80135]: pgmap v63: 5 pgs: 4 active+clean, 1 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 20:41:15 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 23 20:41:15 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 23 20:41:15 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 23 20:41:15 compute-1 ceph-mon[80135]: mon.compute-2 calling monitor election
Nov 23 20:41:15 compute-1 ceph-mon[80135]: pgmap v64: 5 pgs: 4 active+clean, 1 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 20:41:15 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 23 20:41:15 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 23 20:41:15 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 23 20:41:15 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 23 20:41:15 compute-1 ceph-mon[80135]: pgmap v65: 5 pgs: 1 creating+peering, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 20:41:15 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 23 20:41:15 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 23 20:41:15 compute-1 ceph-mon[80135]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 23 20:41:15 compute-1 ceph-mon[80135]: monmap epoch 2
Nov 23 20:41:15 compute-1 ceph-mon[80135]: fsid 03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:41:15 compute-1 ceph-mon[80135]: last_changed 2025-11-23T20:41:04.417115+0000
Nov 23 20:41:15 compute-1 ceph-mon[80135]: created 2025-11-23T20:38:54.371685+0000
Nov 23 20:41:15 compute-1 ceph-mon[80135]: min_mon_release 19 (squid)
Nov 23 20:41:15 compute-1 ceph-mon[80135]: election_strategy: 1
Nov 23 20:41:15 compute-1 ceph-mon[80135]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Nov 23 20:41:15 compute-1 ceph-mon[80135]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Nov 23 20:41:15 compute-1 ceph-mon[80135]: fsmap 
Nov 23 20:41:15 compute-1 ceph-mon[80135]: osdmap e16: 2 total, 2 up, 2 in
Nov 23 20:41:15 compute-1 ceph-mon[80135]: mgrmap e9: compute-0.oyehye(active, since 111s)
Nov 23 20:41:15 compute-1 ceph-mon[80135]: Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled
Nov 23 20:41:15 compute-1 ceph-mon[80135]: [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled
Nov 23 20:41:15 compute-1 ceph-mon[80135]:     application not enabled on pool 'vms'
Nov 23 20:41:15 compute-1 ceph-mon[80135]:     application not enabled on pool 'volumes'
Nov 23 20:41:15 compute-1 ceph-mon[80135]:     application not enabled on pool 'backups'
Nov 23 20:41:15 compute-1 ceph-mon[80135]:     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Nov 23 20:41:15 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:15 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:15 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:15 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:15 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.jtkauz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 23 20:41:16 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_auth_request failed to assign global_id
Nov 23 20:41:16 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 20:41:16 compute-1 ceph-mon[80135]: mgrc update_daemon_metadata mon.compute-1 metadata {addrs=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-1,kernel_description=#1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025,kernel_version=5.14.0-639.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,os=Linux}
Nov 23 20:41:16 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e17 _set_new_cache_sizes cache_size:1019939832 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:41:16 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_auth_request failed to assign global_id
Nov 23 20:41:17 compute-1 ceph-mon[80135]: Deploying daemon mgr.compute-2.jtkauz on compute-2
Nov 23 20:41:17 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 23 20:41:17 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 23 20:41:17 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 23 20:41:17 compute-1 ceph-mon[80135]: mon.compute-0 calling monitor election
Nov 23 20:41:17 compute-1 ceph-mon[80135]: mon.compute-2 calling monitor election
Nov 23 20:41:17 compute-1 ceph-mon[80135]: pgmap v67: 5 pgs: 1 creating+peering, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 20:41:17 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 23 20:41:17 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 23 20:41:17 compute-1 ceph-mon[80135]: pgmap v68: 5 pgs: 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 20:41:17 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 23 20:41:17 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 23 20:41:17 compute-1 ceph-mon[80135]: pgmap v69: 5 pgs: 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 20:41:17 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 23 20:41:17 compute-1 ceph-mon[80135]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 23 20:41:17 compute-1 ceph-mon[80135]: monmap epoch 3
Nov 23 20:41:17 compute-1 ceph-mon[80135]: fsid 03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:41:17 compute-1 ceph-mon[80135]: last_changed 2025-11-23T20:41:10.249176+0000
Nov 23 20:41:17 compute-1 ceph-mon[80135]: created 2025-11-23T20:38:54.371685+0000
Nov 23 20:41:17 compute-1 ceph-mon[80135]: min_mon_release 19 (squid)
Nov 23 20:41:17 compute-1 ceph-mon[80135]: election_strategy: 1
Nov 23 20:41:17 compute-1 ceph-mon[80135]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Nov 23 20:41:17 compute-1 ceph-mon[80135]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Nov 23 20:41:17 compute-1 ceph-mon[80135]: 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Nov 23 20:41:17 compute-1 ceph-mon[80135]: fsmap 
Nov 23 20:41:17 compute-1 ceph-mon[80135]: osdmap e17: 2 total, 2 up, 2 in
Nov 23 20:41:17 compute-1 ceph-mon[80135]: mgrmap e9: compute-0.oyehye(active, since 118s)
Nov 23 20:41:17 compute-1 ceph-mon[80135]: Health detail: HEALTH_WARN 4 pool(s) do not have an application enabled
Nov 23 20:41:17 compute-1 ceph-mon[80135]: [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled
Nov 23 20:41:17 compute-1 ceph-mon[80135]:     application not enabled on pool 'vms'
Nov 23 20:41:17 compute-1 ceph-mon[80135]:     application not enabled on pool 'volumes'
Nov 23 20:41:17 compute-1 ceph-mon[80135]:     application not enabled on pool 'backups'
Nov 23 20:41:17 compute-1 ceph-mon[80135]:     application not enabled on pool 'images'
Nov 23 20:41:17 compute-1 ceph-mon[80135]:     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Nov 23 20:41:17 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e18 e18: 2 total, 2 up, 2 in
Nov 23 20:41:18 compute-1 ceph-mon[80135]: mon.compute-1 calling monitor election
Nov 23 20:41:18 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 23 20:41:18 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:18 compute-1 ceph-mon[80135]: pgmap v70: 5 pgs: 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 20:41:18 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2361136095' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 23 20:41:18 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:18 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 23 20:41:18 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:18 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 20:41:18 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2361136095' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 23 20:41:18 compute-1 ceph-mon[80135]: osdmap e18: 2 total, 2 up, 2 in
Nov 23 20:41:18 compute-1 sudo[80174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:41:18 compute-1 sudo[80174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:18 compute-1 sudo[80174]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:18 compute-1 sudo[80199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:41:18 compute-1 sudo[80199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:18 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e19 e19: 2 total, 2 up, 2 in
Nov 23 20:41:18 compute-1 podman[80265]: 2025-11-23 20:41:18.873369252 +0000 UTC m=+0.041297056 container create 649722fa42b7c8a8da4772e2df81a986517a43a089dfe3d3096fe3cd19af2124 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 23 20:41:18 compute-1 systemd[1]: Started libpod-conmon-649722fa42b7c8a8da4772e2df81a986517a43a089dfe3d3096fe3cd19af2124.scope.
Nov 23 20:41:18 compute-1 systemd[1]: Started libcrun container.
Nov 23 20:41:18 compute-1 podman[80265]: 2025-11-23 20:41:18.852832318 +0000 UTC m=+0.020760162 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 20:41:18 compute-1 podman[80265]: 2025-11-23 20:41:18.955558568 +0000 UTC m=+0.123486412 container init 649722fa42b7c8a8da4772e2df81a986517a43a089dfe3d3096fe3cd19af2124 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 20:41:18 compute-1 podman[80265]: 2025-11-23 20:41:18.962793096 +0000 UTC m=+0.130720920 container start 649722fa42b7c8a8da4772e2df81a986517a43a089dfe3d3096fe3cd19af2124 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 20:41:18 compute-1 podman[80265]: 2025-11-23 20:41:18.966085304 +0000 UTC m=+0.134013118 container attach 649722fa42b7c8a8da4772e2df81a986517a43a089dfe3d3096fe3cd19af2124 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 23 20:41:18 compute-1 heuristic_ritchie[80281]: 167 167
Nov 23 20:41:18 compute-1 systemd[1]: libpod-649722fa42b7c8a8da4772e2df81a986517a43a089dfe3d3096fe3cd19af2124.scope: Deactivated successfully.
Nov 23 20:41:18 compute-1 podman[80265]: 2025-11-23 20:41:18.973648922 +0000 UTC m=+0.141576736 container died 649722fa42b7c8a8da4772e2df81a986517a43a089dfe3d3096fe3cd19af2124 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_ritchie, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 20:41:19 compute-1 systemd[1]: var-lib-containers-storage-overlay-a829ade8f18bf6c42712ac8804fc57c680fb1a9af4154c3bdab4920c950cd7ee-merged.mount: Deactivated successfully.
Nov 23 20:41:19 compute-1 podman[80265]: 2025-11-23 20:41:19.021534364 +0000 UTC m=+0.189462188 container remove 649722fa42b7c8a8da4772e2df81a986517a43a089dfe3d3096fe3cd19af2124 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_ritchie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 20:41:19 compute-1 systemd[1]: libpod-conmon-649722fa42b7c8a8da4772e2df81a986517a43a089dfe3d3096fe3cd19af2124.scope: Deactivated successfully.
Nov 23 20:41:19 compute-1 systemd[1]: Reloading.
Nov 23 20:41:19 compute-1 systemd-rc-local-generator[80325]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:41:19 compute-1 systemd-sysv-generator[80328]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:41:19 compute-1 systemd[1]: Reloading.
Nov 23 20:41:19 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:19 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.kgyerp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 23 20:41:19 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.kgyerp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 23 20:41:19 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 23 20:41:19 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:41:19 compute-1 ceph-mon[80135]: Deploying daemon mgr.compute-1.kgyerp on compute-1
Nov 23 20:41:19 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 23 20:41:19 compute-1 ceph-mon[80135]: osdmap e19: 2 total, 2 up, 2 in
Nov 23 20:41:19 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 20:41:19 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 20:41:19 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3743302872' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 23 20:41:19 compute-1 systemd-rc-local-generator[80362]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:41:19 compute-1 systemd-sysv-generator[80366]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:41:19 compute-1 systemd[1]: Starting Ceph mgr.compute-1.kgyerp for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 20:41:19 compute-1 podman[80421]: 2025-11-23 20:41:19.778919925 +0000 UTC m=+0.033730419 container create 7db62be7e181db03e92260aa0f19556b1d450268d0fe5d51d3beda04ac329e42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 23 20:41:19 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e81940c4a4f3b4624576586d4b63b42882f37fedff4e95d7e8c93cdad4df98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 20:41:19 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e81940c4a4f3b4624576586d4b63b42882f37fedff4e95d7e8c93cdad4df98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 20:41:19 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e81940c4a4f3b4624576586d4b63b42882f37fedff4e95d7e8c93cdad4df98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 20:41:19 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e81940c4a4f3b4624576586d4b63b42882f37fedff4e95d7e8c93cdad4df98/merged/var/lib/ceph/mgr/ceph-compute-1.kgyerp supports timestamps until 2038 (0x7fffffff)
Nov 23 20:41:19 compute-1 podman[80421]: 2025-11-23 20:41:19.840973301 +0000 UTC m=+0.095783795 container init 7db62be7e181db03e92260aa0f19556b1d450268d0fe5d51d3beda04ac329e42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True)
Nov 23 20:41:19 compute-1 podman[80421]: 2025-11-23 20:41:19.845209029 +0000 UTC m=+0.100019523 container start 7db62be7e181db03e92260aa0f19556b1d450268d0fe5d51d3beda04ac329e42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 20:41:19 compute-1 bash[80421]: 7db62be7e181db03e92260aa0f19556b1d450268d0fe5d51d3beda04ac329e42
Nov 23 20:41:19 compute-1 podman[80421]: 2025-11-23 20:41:19.763639923 +0000 UTC m=+0.018450437 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 20:41:19 compute-1 systemd[1]: Started Ceph mgr.compute-1.kgyerp for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 20:41:19 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e20 e20: 2 total, 2 up, 2 in
Nov 23 20:41:19 compute-1 ceph-mgr[80441]: set uid:gid to 167:167 (ceph:ceph)
Nov 23 20:41:19 compute-1 ceph-mgr[80441]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 23 20:41:19 compute-1 ceph-mgr[80441]: pidfile_write: ignore empty --pid-file
Nov 23 20:41:19 compute-1 sudo[80199]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:19 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 20 pg[7.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:19 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 20 pg[2.0( empty local-lis/les=12/13 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=20 pruub=11.916127205s) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active pruub 56.793205261s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:19 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 20 pg[2.0( empty local-lis/les=12/13 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=20 pruub=11.916127205s) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown pruub 56.793205261s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:19 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'alerts'
Nov 23 20:41:20 compute-1 ceph-mgr[80441]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 23 20:41:20 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'balancer'
Nov 23 20:41:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:20.007+0000 7f6f8aff0140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 23 20:41:20 compute-1 ceph-mgr[80441]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 23 20:41:20 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'cephadm'
Nov 23 20:41:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:20.083+0000 7f6f8aff0140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 23 20:41:20 compute-1 ceph-mon[80135]: pgmap v73: 6 pgs: 1 unknown, 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 20:41:20 compute-1 ceph-mon[80135]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 23 20:41:20 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 23 20:41:20 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 20:41:20 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3743302872' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 23 20:41:20 compute-1 ceph-mon[80135]: osdmap e20: 2 total, 2 up, 2 in
Nov 23 20:41:20 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 20:41:20 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:20 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:20 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:20 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:20 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 23 20:41:20 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 23 20:41:20 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:41:20 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'crash'
Nov 23 20:41:20 compute-1 ceph-mgr[80441]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 23 20:41:20 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'dashboard'
Nov 23 20:41:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:20.871+0000 7f6f8aff0140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 23 20:41:20 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e21 e21: 2 total, 2 up, 2 in
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.1d( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.1e( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.1c( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.1b( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.1f( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.a( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.9( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.8( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.7( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.6( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.4( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.2( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.1( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.5( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.3( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.b( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.c( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.d( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.e( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.f( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.10( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.11( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.12( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.13( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.14( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.15( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.16( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.17( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.18( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.19( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.1a( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.1b( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.1d( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.1e( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.1c( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.9( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.1f( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.8( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.4( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.7( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.2( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.6( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[7.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.0( empty local-lis/les=20/21 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.a( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.3( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.c( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.b( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.e( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.f( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.5( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.12( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.d( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.14( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.10( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.11( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.15( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.16( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.1( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.17( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.13( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.1a( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.18( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.19( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:21 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e21 _set_new_cache_sizes cache_size:1020053411 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:41:21 compute-1 ceph-mon[80135]: Deploying daemon crash.compute-2 on compute-2
Nov 23 20:41:21 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/39405231' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 23 20:41:21 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 23 20:41:21 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/39405231' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 23 20:41:21 compute-1 ceph-mon[80135]: osdmap e21: 2 total, 2 up, 2 in
Nov 23 20:41:21 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 20:41:21 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 20:41:21 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 20:41:21 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'devicehealth'
Nov 23 20:41:21 compute-1 ceph-mgr[80441]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 23 20:41:21 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'diskprediction_local'
Nov 23 20:41:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:21.507+0000 7f6f8aff0140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 23 20:41:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 23 20:41:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 23 20:41:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]:   from numpy import show_config as show_numpy_config
Nov 23 20:41:21 compute-1 ceph-mgr[80441]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 23 20:41:21 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'influx'
Nov 23 20:41:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:21.693+0000 7f6f8aff0140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 23 20:41:21 compute-1 ceph-mgr[80441]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 23 20:41:21 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'insights'
Nov 23 20:41:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:21.765+0000 7f6f8aff0140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 23 20:41:21 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'iostat'
Nov 23 20:41:21 compute-1 ceph-mgr[80441]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 23 20:41:21 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'k8sevents'
Nov 23 20:41:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:21.900+0000 7f6f8aff0140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 23 20:41:21 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Nov 23 20:41:21 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Nov 23 20:41:21 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e22 e22: 2 total, 2 up, 2 in
Nov 23 20:41:22 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'localpool'
Nov 23 20:41:22 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'mds_autoscaler'
Nov 23 20:41:22 compute-1 ceph-mon[80135]: pgmap v76: 38 pgs: 33 unknown, 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 20:41:22 compute-1 ceph-mon[80135]: 2.1e scrub starts
Nov 23 20:41:22 compute-1 ceph-mon[80135]: 2.1e scrub ok
Nov 23 20:41:22 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 23 20:41:22 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 20:41:22 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 20:41:22 compute-1 ceph-mon[80135]: osdmap e22: 2 total, 2 up, 2 in
Nov 23 20:41:22 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1243267938' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 23 20:41:22 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:22 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:22 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:22 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:22 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 20:41:22 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 20:41:22 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:41:22 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 20:41:22 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:41:22 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:22 compute-1 ceph-mon[80135]: Standby manager daemon compute-2.jtkauz started
Nov 23 20:41:22 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'mirroring'
Nov 23 20:41:22 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'nfs'
Nov 23 20:41:22 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.1b deep-scrub starts
Nov 23 20:41:22 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.1b deep-scrub ok
Nov 23 20:41:22 compute-1 ceph-mgr[80441]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 23 20:41:22 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'orchestrator'
Nov 23 20:41:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:22.933+0000 7f6f8aff0140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 23 20:41:22 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e23 e23: 2 total, 2 up, 2 in
Nov 23 20:41:23 compute-1 ceph-mgr[80441]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 23 20:41:23 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'osd_perf_query'
Nov 23 20:41:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:23.169+0000 7f6f8aff0140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 23 20:41:23 compute-1 ceph-mgr[80441]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 23 20:41:23 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'osd_support'
Nov 23 20:41:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:23.248+0000 7f6f8aff0140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 23 20:41:23 compute-1 ceph-mgr[80441]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 23 20:41:23 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'pg_autoscaler'
Nov 23 20:41:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:23.318+0000 7f6f8aff0140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 23 20:41:23 compute-1 ceph-mgr[80441]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 23 20:41:23 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'progress'
Nov 23 20:41:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:23.396+0000 7f6f8aff0140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 23 20:41:23 compute-1 ceph-mgr[80441]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 23 20:41:23 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'prometheus'
Nov 23 20:41:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:23.469+0000 7f6f8aff0140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 23 20:41:23 compute-1 ceph-mgr[80441]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 23 20:41:23 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'rbd_support'
Nov 23 20:41:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:23.826+0000 7f6f8aff0140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 23 20:41:23 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.1d deep-scrub starts
Nov 23 20:41:23 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.1d deep-scrub ok
Nov 23 20:41:23 compute-1 ceph-mgr[80441]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 23 20:41:23 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'restful'
Nov 23 20:41:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:23.935+0000 7f6f8aff0140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 23 20:41:23 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e24 e24: 2 total, 2 up, 2 in
Nov 23 20:41:24 compute-1 ceph-mon[80135]: 2.1b deep-scrub starts
Nov 23 20:41:24 compute-1 ceph-mon[80135]: 2.1b deep-scrub ok
Nov 23 20:41:24 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1243267938' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 23 20:41:24 compute-1 ceph-mon[80135]: osdmap e23: 2 total, 2 up, 2 in
Nov 23 20:41:24 compute-1 ceph-mon[80135]: pgmap v79: 100 pgs: 32 unknown, 68 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 20:41:24 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 20:41:24 compute-1 ceph-mon[80135]: mgrmap e10: compute-0.oyehye(active, since 2m), standbys: compute-2.jtkauz
Nov 23 20:41:24 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mgr metadata", "who": "compute-2.jtkauz", "id": "compute-2.jtkauz"}]: dispatch
Nov 23 20:41:24 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2261115406' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 23 20:41:24 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'rgw'
Nov 23 20:41:24 compute-1 ceph-mgr[80441]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 23 20:41:24 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'rook'
Nov 23 20:41:24 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:24.390+0000 7f6f8aff0140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 23 20:41:24 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e25 e25: 3 total, 2 up, 3 in
Nov 23 20:41:24 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Nov 23 20:41:24 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Nov 23 20:41:24 compute-1 ceph-mgr[80441]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 23 20:41:24 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'selftest'
Nov 23 20:41:24 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:24.952+0000 7f6f8aff0140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 23 20:41:25 compute-1 ceph-mgr[80441]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 23 20:41:25 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'snap_schedule'
Nov 23 20:41:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:25.025+0000 7f6f8aff0140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 23 20:41:25 compute-1 ceph-mon[80135]: 2.1d deep-scrub starts
Nov 23 20:41:25 compute-1 ceph-mon[80135]: 2.1d deep-scrub ok
Nov 23 20:41:25 compute-1 ceph-mon[80135]: 4.1e scrub starts
Nov 23 20:41:25 compute-1 ceph-mon[80135]: 4.1e scrub ok
Nov 23 20:41:25 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 20:41:25 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2261115406' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 23 20:41:25 compute-1 ceph-mon[80135]: osdmap e24: 2 total, 2 up, 2 in
Nov 23 20:41:25 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1014258786' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "89316dd3-297e-4d1b-953e-7f2ac7cbe63c"}]: dispatch
Nov 23 20:41:25 compute-1 ceph-mon[80135]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "89316dd3-297e-4d1b-953e-7f2ac7cbe63c"}]: dispatch
Nov 23 20:41:25 compute-1 ceph-mon[80135]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "89316dd3-297e-4d1b-953e-7f2ac7cbe63c"}]': finished
Nov 23 20:41:25 compute-1 ceph-mon[80135]: osdmap e25: 3 total, 2 up, 3 in
Nov 23 20:41:25 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 23 20:41:25 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/4110558162' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 23 20:41:25 compute-1 ceph-mgr[80441]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 23 20:41:25 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'stats'
Nov 23 20:41:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:25.118+0000 7f6f8aff0140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 23 20:41:25 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'status'
Nov 23 20:41:25 compute-1 ceph-mgr[80441]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 23 20:41:25 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'telegraf'
Nov 23 20:41:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:25.266+0000 7f6f8aff0140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 23 20:41:25 compute-1 ceph-mgr[80441]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 23 20:41:25 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'telemetry'
Nov 23 20:41:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:25.339+0000 7f6f8aff0140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 23 20:41:25 compute-1 ceph-mgr[80441]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 23 20:41:25 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'test_orchestrator'
Nov 23 20:41:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:25.499+0000 7f6f8aff0140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 23 20:41:25 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e26 e26: 3 total, 2 up, 3 in
Nov 23 20:41:25 compute-1 ceph-mgr[80441]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 23 20:41:25 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'volumes'
Nov 23 20:41:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:25.728+0000 7f6f8aff0140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 23 20:41:25 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Nov 23 20:41:25 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Nov 23 20:41:25 compute-1 ceph-mgr[80441]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 23 20:41:25 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'zabbix'
Nov 23 20:41:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:25.990+0000 7f6f8aff0140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 23 20:41:26 compute-1 ceph-mgr[80441]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 23 20:41:26 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:26.061+0000 7f6f8aff0140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 23 20:41:26 compute-1 ceph-mgr[80441]: ms_deliver_dispatch: unhandled message 0x55f128318d00 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Nov 23 20:41:26 compute-1 ceph-mon[80135]: 2.1c scrub starts
Nov 23 20:41:26 compute-1 ceph-mon[80135]: 2.1c scrub ok
Nov 23 20:41:26 compute-1 ceph-mon[80135]: 3.18 scrub starts
Nov 23 20:41:26 compute-1 ceph-mon[80135]: 3.18 scrub ok
Nov 23 20:41:26 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3255099167' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 23 20:41:26 compute-1 ceph-mon[80135]: pgmap v82: 131 pgs: 31 unknown, 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 20:41:26 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/4110558162' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 23 20:41:26 compute-1 ceph-mon[80135]: osdmap e26: 3 total, 2 up, 3 in
Nov 23 20:41:26 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 23 20:41:26 compute-1 ceph-mon[80135]: 3.17 scrub starts
Nov 23 20:41:26 compute-1 ceph-mon[80135]: 2.9 scrub starts
Nov 23 20:41:26 compute-1 ceph-mon[80135]: 3.17 scrub ok
Nov 23 20:41:26 compute-1 ceph-mon[80135]: 2.9 scrub ok
Nov 23 20:41:26 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e26 _set_new_cache_sizes cache_size:1020054715 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:41:26 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Nov 23 20:41:26 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Nov 23 20:41:27 compute-1 ceph-mon[80135]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 23 20:41:27 compute-1 ceph-mon[80135]: Standby manager daemon compute-1.kgyerp started
Nov 23 20:41:27 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/54502927' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 23 20:41:27 compute-1 ceph-mon[80135]: 3.19 scrub starts
Nov 23 20:41:27 compute-1 ceph-mon[80135]: 3.19 scrub ok
Nov 23 20:41:27 compute-1 ceph-mon[80135]: 2.2 scrub starts
Nov 23 20:41:27 compute-1 ceph-mon[80135]: 2.2 scrub ok
Nov 23 20:41:27 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:27 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:27 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e27 e27: 3 total, 2 up, 3 in
Nov 23 20:41:27 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.4 deep-scrub starts
Nov 23 20:41:27 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.4 deep-scrub ok
Nov 23 20:41:28 compute-1 ceph-mon[80135]: pgmap v84: 131 pgs: 31 unknown, 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 20:41:28 compute-1 ceph-mon[80135]: mgrmap e11: compute-0.oyehye(active, since 2m), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 20:41:28 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mgr metadata", "who": "compute-1.kgyerp", "id": "compute-1.kgyerp"}]: dispatch
Nov 23 20:41:28 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/54502927' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 23 20:41:28 compute-1 ceph-mon[80135]: osdmap e27: 3 total, 2 up, 3 in
Nov 23 20:41:28 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 23 20:41:28 compute-1 ceph-mon[80135]: 3.15 scrub starts
Nov 23 20:41:28 compute-1 ceph-mon[80135]: 3.15 scrub ok
Nov 23 20:41:28 compute-1 ceph-mon[80135]: 2.4 deep-scrub starts
Nov 23 20:41:28 compute-1 ceph-mon[80135]: 2.4 deep-scrub ok
Nov 23 20:41:28 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/330844918' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 23 20:41:28 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Nov 23 20:41:28 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Nov 23 20:41:28 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e28 e28: 3 total, 2 up, 3 in
Nov 23 20:41:29 compute-1 sshd-session[80473]: Invalid user hadoop from 34.91.0.68 port 40178
Nov 23 20:41:29 compute-1 sshd-session[80473]: Received disconnect from 34.91.0.68 port 40178:11: Bye Bye [preauth]
Nov 23 20:41:29 compute-1 sshd-session[80473]: Disconnected from invalid user hadoop 34.91.0.68 port 40178 [preauth]
Nov 23 20:41:29 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.6 deep-scrub starts
Nov 23 20:41:29 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.6 deep-scrub ok
Nov 23 20:41:29 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e29 e29: 3 total, 2 up, 3 in
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.1e( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.203415871s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 70.151290894s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.9( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225918770s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 70.173805237s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.1f( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225991249s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 70.173904419s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.1e( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.203372002s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.151290894s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.1b( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.203423500s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 70.151290894s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.1f( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225964546s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.173904419s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.a( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.226203918s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 70.174087524s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.9( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225861549s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.173805237s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.1b( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.203300476s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.151290894s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.a( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.226050377s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.174087524s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.1( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225702286s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 70.174362183s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.1( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225683212s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.174362183s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.c( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225502968s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 70.174270630s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.c( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225488663s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.174270630s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.d( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225337029s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 70.174171448s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.6( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225158691s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 70.174041748s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.10( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225261688s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 70.174163818s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.4( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225074768s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 70.173973083s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.e( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225261688s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 70.174171448s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.10( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225251198s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.174163818s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.d( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225267410s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.174171448s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.e( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225241661s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.174171448s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.4( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225033760s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.173973083s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.6( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225116730s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.174041748s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.13( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225327492s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 70.174407959s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.13( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225315094s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.174407959s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.15( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225172997s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 70.174324036s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.15( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225155830s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.174324036s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.19( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225176811s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 70.174400330s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.19( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225156784s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.174400330s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:29 compute-1 ceph-mon[80135]: 3.14 deep-scrub starts
Nov 23 20:41:29 compute-1 ceph-mon[80135]: 3.14 deep-scrub ok
Nov 23 20:41:29 compute-1 ceph-mon[80135]: 2.7 scrub starts
Nov 23 20:41:29 compute-1 ceph-mon[80135]: 2.7 scrub ok
Nov 23 20:41:29 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/330844918' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 23 20:41:29 compute-1 ceph-mon[80135]: osdmap e28: 3 total, 2 up, 3 in
Nov 23 20:41:29 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 23 20:41:29 compute-1 ceph-mon[80135]: pgmap v87: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 20:41:29 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 20:41:29 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 20:41:29 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 20:41:29 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[5.1f( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[4.1f( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[5.11( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[5.10( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[3.16( empty local-lis/les=0/0 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[3.15( empty local-lis/les=0/0 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[3.14( empty local-lis/les=0/0 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[4.13( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[5.15( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[3.13( empty local-lis/les=0/0 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[4.15( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[3.11( empty local-lis/les=0/0 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[5.16( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[5.9( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[3.f( empty local-lis/les=0/0 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[3.10( empty local-lis/les=0/0 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[4.8( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[3.e( empty local-lis/les=0/0 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[3.d( empty local-lis/les=0/0 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[4.a( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[4.9( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[3.c( empty local-lis/les=0/0 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[4.c( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[3.a( empty local-lis/les=0/0 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[4.d( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[4.1( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[3.5( empty local-lis/les=0/0 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[5.7( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[5.1( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[4.5( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[3.3( empty local-lis/les=0/0 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[5.2( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[3.9( empty local-lis/les=0/0 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[5.f( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[4.e( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[5.e( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[5.1c( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[3.1a( empty local-lis/les=0/0 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[5.1a( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[4.1b( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[5.1b( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[4.1a( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[3.1c( empty local-lis/les=0/0 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[3.1d( empty local-lis/les=0/0 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[5.18( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[4.18( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[5.4( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:41:30 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Nov 23 20:41:30 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Nov 23 20:41:30 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e30 e30: 3 total, 2 up, 3 in
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[5.18( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[3.1c( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[4.1b( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[3.1d( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[5.1b( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[5.1c( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[4.c( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[3.1a( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[4.18( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[3.9( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[5.f( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[4.1( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[4.e( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[5.1( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[5.e( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[3.5( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[3.3( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[4.1a( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[5.7( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[5.4( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[5.2( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[4.5( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[4.d( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[3.d( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[4.a( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[3.a( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[3.c( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[4.8( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[5.9( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[4.9( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[3.e( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[3.11( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[3.f( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[5.16( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[3.10( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[3.13( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[4.15( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[5.11( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[4.13( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[3.15( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[3.14( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[3.16( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[5.15( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[5.1f( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[5.10( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[5.1a( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[4.1f( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:41:30 compute-1 ceph-mon[80135]: 3.11 scrub starts
Nov 23 20:41:30 compute-1 ceph-mon[80135]: 3.11 scrub ok
Nov 23 20:41:30 compute-1 ceph-mon[80135]: 2.6 deep-scrub starts
Nov 23 20:41:30 compute-1 ceph-mon[80135]: 2.6 deep-scrub ok
Nov 23 20:41:30 compute-1 ceph-mon[80135]: Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Nov 23 20:41:30 compute-1 ceph-mon[80135]: Cluster is now healthy
Nov 23 20:41:30 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 20:41:30 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 20:41:30 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 20:41:30 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 20:41:30 compute-1 ceph-mon[80135]: osdmap e29: 3 total, 2 up, 3 in
Nov 23 20:41:30 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 23 20:41:30 compute-1 ceph-mon[80135]: osdmap e30: 3 total, 2 up, 3 in
Nov 23 20:41:30 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 23 20:41:31 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:41:31 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Nov 23 20:41:31 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Nov 23 20:41:32 compute-1 ceph-mon[80135]: 5.1e scrub starts
Nov 23 20:41:32 compute-1 ceph-mon[80135]: 5.1e scrub ok
Nov 23 20:41:32 compute-1 ceph-mon[80135]: 2.8 scrub starts
Nov 23 20:41:32 compute-1 ceph-mon[80135]: 2.8 scrub ok
Nov 23 20:41:32 compute-1 ceph-mon[80135]: pgmap v90: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 20:41:32 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 23 20:41:32 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:41:32 compute-1 ceph-mon[80135]: Deploying daemon osd.2 on compute-2
Nov 23 20:41:32 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.3 deep-scrub starts
Nov 23 20:41:32 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.3 deep-scrub ok
Nov 23 20:41:33 compute-1 ceph-mon[80135]: 4.10 scrub starts
Nov 23 20:41:33 compute-1 ceph-mon[80135]: 4.10 scrub ok
Nov 23 20:41:33 compute-1 ceph-mon[80135]: 2.5 scrub starts
Nov 23 20:41:33 compute-1 ceph-mon[80135]: 2.5 scrub ok
Nov 23 20:41:33 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1120149195' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 23 20:41:33 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1120149195' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 23 20:41:33 compute-1 sshd-session[80475]: Invalid user solv from 161.35.133.66 port 33734
Nov 23 20:41:33 compute-1 sshd-session[80475]: Connection closed by invalid user solv 161.35.133.66 port 33734 [preauth]
Nov 23 20:41:33 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Nov 23 20:41:33 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Nov 23 20:41:34 compute-1 ceph-mon[80135]: 4.11 scrub starts
Nov 23 20:41:34 compute-1 ceph-mon[80135]: 4.11 scrub ok
Nov 23 20:41:34 compute-1 ceph-mon[80135]: 2.3 deep-scrub starts
Nov 23 20:41:34 compute-1 ceph-mon[80135]: 2.3 deep-scrub ok
Nov 23 20:41:34 compute-1 ceph-mon[80135]: pgmap v91: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 20:41:34 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/475116719' entity='client.admin' 
Nov 23 20:41:34 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.b scrub starts
Nov 23 20:41:35 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.b scrub ok
Nov 23 20:41:35 compute-1 ceph-mon[80135]: 5.13 scrub starts
Nov 23 20:41:35 compute-1 ceph-mon[80135]: 5.13 scrub ok
Nov 23 20:41:35 compute-1 ceph-mon[80135]: 2.0 scrub starts
Nov 23 20:41:35 compute-1 ceph-mon[80135]: 2.0 scrub ok
Nov 23 20:41:35 compute-1 ceph-mon[80135]: from='client.14274 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 20:41:35 compute-1 ceph-mon[80135]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 23 20:41:35 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:35 compute-1 ceph-mon[80135]: Saving service ingress.rgw.default spec with placement count:2
Nov 23 20:41:35 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:35 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.f scrub starts
Nov 23 20:41:36 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.f scrub ok
Nov 23 20:41:36 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:41:36 compute-1 ceph-mon[80135]: 4.12 scrub starts
Nov 23 20:41:36 compute-1 ceph-mon[80135]: 4.12 scrub ok
Nov 23 20:41:36 compute-1 ceph-mon[80135]: 2.b scrub starts
Nov 23 20:41:36 compute-1 ceph-mon[80135]: 2.b scrub ok
Nov 23 20:41:36 compute-1 ceph-mon[80135]: pgmap v92: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 20:41:37 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Nov 23 20:41:37 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Nov 23 20:41:37 compute-1 ceph-mon[80135]: 5.12 scrub starts
Nov 23 20:41:37 compute-1 ceph-mon[80135]: 5.12 scrub ok
Nov 23 20:41:37 compute-1 ceph-mon[80135]: 2.f scrub starts
Nov 23 20:41:37 compute-1 ceph-mon[80135]: 2.f scrub ok
Nov 23 20:41:37 compute-1 ceph-mon[80135]: 4.14 scrub starts
Nov 23 20:41:37 compute-1 ceph-mon[80135]: 4.14 scrub ok
Nov 23 20:41:37 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:37 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:37 compute-1 ceph-mon[80135]: from='client.14280 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 20:41:37 compute-1 ceph-mon[80135]: Saving service node-exporter spec with placement *
Nov 23 20:41:37 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:37 compute-1 ceph-mon[80135]: Saving service grafana spec with placement compute-0;count:1
Nov 23 20:41:37 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:37 compute-1 ceph-mon[80135]: Saving service prometheus spec with placement compute-0;count:1
Nov 23 20:41:37 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:37 compute-1 ceph-mon[80135]: Saving service alertmanager spec with placement compute-0;count:1
Nov 23 20:41:37 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:37 compute-1 ceph-mon[80135]: pgmap v93: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 20:41:38 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Nov 23 20:41:38 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Nov 23 20:41:38 compute-1 ceph-mon[80135]: 2.11 scrub starts
Nov 23 20:41:38 compute-1 ceph-mon[80135]: 2.11 scrub ok
Nov 23 20:41:38 compute-1 ceph-mon[80135]: 5.14 deep-scrub starts
Nov 23 20:41:38 compute-1 ceph-mon[80135]: 5.14 deep-scrub ok
Nov 23 20:41:38 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/209025710' entity='client.admin' 
Nov 23 20:41:38 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3810457862' entity='client.admin' 
Nov 23 20:41:39 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.14 deep-scrub starts
Nov 23 20:41:39 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.14 deep-scrub ok
Nov 23 20:41:39 compute-1 sudo[80477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 20:41:39 compute-1 sudo[80477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:39 compute-1 sudo[80477]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:39 compute-1 sudo[80502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:41:39 compute-1 sudo[80502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:39 compute-1 sudo[80502]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:39 compute-1 sudo[80527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 23 20:41:39 compute-1 sudo[80527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:39 compute-1 podman[80625]: 2025-11-23 20:41:39.955304609 +0000 UTC m=+0.055523464 container exec e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 20:41:39 compute-1 ceph-mon[80135]: 2.12 scrub starts
Nov 23 20:41:39 compute-1 ceph-mon[80135]: 2.12 scrub ok
Nov 23 20:41:39 compute-1 ceph-mon[80135]: 3.12 scrub starts
Nov 23 20:41:39 compute-1 ceph-mon[80135]: 3.12 scrub ok
Nov 23 20:41:39 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:39 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:39 compute-1 ceph-mon[80135]: pgmap v94: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 20:41:39 compute-1 ceph-mon[80135]: from='osd.2 [v2:192.168.122.102:6800/530987644,v1:192.168.122.102:6801/530987644]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 23 20:41:39 compute-1 ceph-mon[80135]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 23 20:41:39 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1043924838' entity='client.admin' 
Nov 23 20:41:40 compute-1 podman[80625]: 2025-11-23 20:41:40.043544464 +0000 UTC m=+0.143763229 container exec_died e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 23 20:41:40 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Nov 23 20:41:40 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Nov 23 20:41:40 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e31 e31: 3 total, 2 up, 3 in
Nov 23 20:41:40 compute-1 sudo[80527]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:40 compute-1 sudo[80712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:41:40 compute-1 sudo[80712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:40 compute-1 sudo[80712]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:40 compute-1 sudo[80737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 20:41:40 compute-1 sudo[80737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:40 compute-1 sudo[80737]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:40 compute-1 ceph-mon[80135]: 2.14 deep-scrub starts
Nov 23 20:41:40 compute-1 ceph-mon[80135]: 2.14 deep-scrub ok
Nov 23 20:41:40 compute-1 ceph-mon[80135]: 5.17 scrub starts
Nov 23 20:41:40 compute-1 ceph-mon[80135]: 5.17 scrub ok
Nov 23 20:41:40 compute-1 ceph-mon[80135]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 23 20:41:40 compute-1 ceph-mon[80135]: osdmap e31: 3 total, 2 up, 3 in
Nov 23 20:41:40 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 23 20:41:40 compute-1 ceph-mon[80135]: from='osd.2 [v2:192.168.122.102:6800/530987644,v1:192.168.122.102:6801/530987644]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 23 20:41:40 compute-1 ceph-mon[80135]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 23 20:41:40 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:40 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:40 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:40 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:41 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Nov 23 20:41:41 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Nov 23 20:41:41 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e32 e32: 3 total, 2 up, 3 in
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[4.1f( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.840518951s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 79.969696045s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[2.18( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=32 pruub=12.045534134s) [] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 active pruub 78.174789429s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[3.15( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.839921951s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 79.969192505s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[4.1f( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.840518951s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969696045s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[2.18( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=32 pruub=12.045534134s) [] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.174789429s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[3.15( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.839921951s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969192505s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[4.15( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.839769363s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 79.969139099s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[2.12( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=32 pruub=12.045314789s) [] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 active pruub 78.174690247s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[4.15( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.839769363s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969139099s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[2.f( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=32 pruub=12.045099258s) [] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 active pruub 78.174659729s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[2.f( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=32 pruub=12.045099258s) [] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.174659729s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[4.9( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.839397430s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 79.969032288s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[4.9( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.839397430s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969032288s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[3.11( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.839385986s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 79.969039917s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[4.8( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.839334488s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 79.969017029s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[3.e( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.839289665s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 79.969032288s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[3.11( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.839385986s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969039917s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[4.8( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.839334488s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969017029s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[2.b( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=32 pruub=12.044927597s) [] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 active pruub 78.174743652s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[3.e( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.839289665s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969032288s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[2.b( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=32 pruub=12.044927597s) [] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.174743652s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[5.4( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.838732719s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 79.968681335s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[5.4( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.838732719s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.968681335s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[2.5( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=32 pruub=12.044606209s) [] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 active pruub 78.174667358s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[2.5( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=32 pruub=12.044606209s) [] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.174667358s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[2.12( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=32 pruub=12.045314789s) [] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.174690247s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[4.1( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.838173866s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 79.968399048s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[5.e( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.838168144s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 79.968414307s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[5.e( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.838168144s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.968414307s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[3.1a( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.838021278s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 79.968292236s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[4.1( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.838173866s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.968399048s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[3.1a( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.838021278s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.968292236s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[3.9( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.838040352s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 79.968322754s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[3.1d( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.837761879s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 79.968147278s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[3.9( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.838040352s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.968322754s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[3.1d( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.837761879s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.968147278s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[2.1c( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=32 pruub=12.043560028s) [] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 active pruub 78.174041748s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[5.1a( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.838843346s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 79.969337463s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[5.1a( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.838843346s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969337463s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[2.1c( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=32 pruub=12.043560028s) [] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.174041748s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[2.1d( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=32 pruub=12.021019936s) [] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 active pruub 78.151596069s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:41 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[2.1d( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=32 pruub=12.021019936s) [] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.151596069s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:41 compute-1 sudo[80816]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqckbumbsbqserfksgbcctzfvzmqmxtc ; /usr/bin/python3'
Nov 23 20:41:41 compute-1 sudo[80816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:41:41 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:41:41 compute-1 python3[80818]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:41:41 compute-1 sudo[80816]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:42 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Nov 23 20:41:42 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Nov 23 20:41:42 compute-1 ceph-mon[80135]: 2.16 scrub starts
Nov 23 20:41:42 compute-1 ceph-mon[80135]: 2.16 scrub ok
Nov 23 20:41:42 compute-1 ceph-mon[80135]: 4.16 scrub starts
Nov 23 20:41:42 compute-1 ceph-mon[80135]: 4.16 scrub ok
Nov 23 20:41:42 compute-1 ceph-mon[80135]: pgmap v96: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 20:41:42 compute-1 ceph-mon[80135]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Nov 23 20:41:42 compute-1 ceph-mon[80135]: osdmap e32: 3 total, 2 up, 3 in
Nov 23 20:41:42 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 23 20:41:42 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 23 20:41:42 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1122996363' entity='client.admin' 
Nov 23 20:41:43 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Nov 23 20:41:43 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Nov 23 20:41:43 compute-1 ceph-mon[80135]: purged_snaps scrub starts
Nov 23 20:41:43 compute-1 ceph-mon[80135]: purged_snaps scrub ok
Nov 23 20:41:43 compute-1 ceph-mon[80135]: 2.17 scrub starts
Nov 23 20:41:43 compute-1 ceph-mon[80135]: 2.17 scrub ok
Nov 23 20:41:43 compute-1 ceph-mon[80135]: 4.17 deep-scrub starts
Nov 23 20:41:43 compute-1 ceph-mon[80135]: 4.17 deep-scrub ok
Nov 23 20:41:43 compute-1 ceph-mon[80135]: 2.1a scrub starts
Nov 23 20:41:43 compute-1 ceph-mon[80135]: 2.1a scrub ok
Nov 23 20:41:43 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 23 20:41:43 compute-1 ceph-mon[80135]: 5.8 scrub starts
Nov 23 20:41:43 compute-1 ceph-mon[80135]: 5.8 scrub ok
Nov 23 20:41:43 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1003409241' entity='client.admin' 
Nov 23 20:41:43 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 23 20:41:44 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 4.1b deep-scrub starts
Nov 23 20:41:44 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 4.1b deep-scrub ok
Nov 23 20:41:44 compute-1 ceph-mon[80135]: 3.1c scrub starts
Nov 23 20:41:44 compute-1 ceph-mon[80135]: 3.1c scrub ok
Nov 23 20:41:44 compute-1 ceph-mon[80135]: pgmap v98: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 20:41:44 compute-1 ceph-mon[80135]: 5.a scrub starts
Nov 23 20:41:44 compute-1 ceph-mon[80135]: 5.a scrub ok
Nov 23 20:41:44 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 23 20:41:44 compute-1 sudo[80832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 23 20:41:44 compute-1 sudo[80832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:44 compute-1 sudo[80832]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:44 compute-1 sudo[80857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph
Nov 23 20:41:44 compute-1 sudo[80857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:44 compute-1 sudo[80857]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:44 compute-1 sudo[80882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.conf.new
Nov 23 20:41:44 compute-1 sudo[80882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:44 compute-1 sudo[80882]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:44 compute-1 sudo[80907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:41:44 compute-1 sudo[80907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:44 compute-1 sudo[80907]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:44 compute-1 sudo[80932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.conf.new
Nov 23 20:41:44 compute-1 sudo[80932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:44 compute-1 sudo[80932]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:44 compute-1 sudo[80980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.conf.new
Nov 23 20:41:44 compute-1 sudo[80980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:44 compute-1 sudo[80980]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:44 compute-1 sudo[81005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.conf.new
Nov 23 20:41:44 compute-1 sudo[81005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:44 compute-1 sudo[81005]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:45 compute-1 sudo[81030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 23 20:41:45 compute-1 sudo[81030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:45 compute-1 sudo[81030]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:45 compute-1 sudo[81055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config
Nov 23 20:41:45 compute-1 sudo[81055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:45 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Nov 23 20:41:45 compute-1 sudo[81055]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:45 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Nov 23 20:41:45 compute-1 sudo[81080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config
Nov 23 20:41:45 compute-1 sudo[81080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:45 compute-1 sudo[81080]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:45 compute-1 sudo[81105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf.new
Nov 23 20:41:45 compute-1 sudo[81105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:45 compute-1 sudo[81105]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:45 compute-1 sudo[81130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:41:45 compute-1 sudo[81130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:45 compute-1 sudo[81130]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:45 compute-1 sudo[81155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf.new
Nov 23 20:41:45 compute-1 sudo[81155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:45 compute-1 sudo[81155]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:45 compute-1 sudo[81203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf.new
Nov 23 20:41:45 compute-1 sudo[81203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:45 compute-1 sudo[81203]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:45 compute-1 sudo[81228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf.new
Nov 23 20:41:45 compute-1 sudo[81228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:45 compute-1 sudo[81228]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:45 compute-1 ceph-mon[80135]: 4.1b deep-scrub starts
Nov 23 20:41:45 compute-1 ceph-mon[80135]: 4.1b deep-scrub ok
Nov 23 20:41:45 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:45 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1887137413' entity='client.admin' 
Nov 23 20:41:45 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:45 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 23 20:41:45 compute-1 ceph-mon[80135]: Adjusting osd_memory_target on compute-2 to 128.0M
Nov 23 20:41:45 compute-1 ceph-mon[80135]: Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Nov 23 20:41:45 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:41:45 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 20:41:45 compute-1 ceph-mon[80135]: Updating compute-0:/etc/ceph/ceph.conf
Nov 23 20:41:45 compute-1 ceph-mon[80135]: Updating compute-1:/etc/ceph/ceph.conf
Nov 23 20:41:45 compute-1 ceph-mon[80135]: Updating compute-2:/etc/ceph/ceph.conf
Nov 23 20:41:45 compute-1 ceph-mon[80135]: 4.b scrub starts
Nov 23 20:41:45 compute-1 ceph-mon[80135]: 4.b scrub ok
Nov 23 20:41:45 compute-1 ceph-mon[80135]: Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 20:41:45 compute-1 ceph-mon[80135]: Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 20:41:45 compute-1 ceph-mon[80135]: pgmap v99: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 23 20:41:45 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 23 20:41:45 compute-1 ceph-mon[80135]: Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 20:41:45 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1515026058' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Nov 23 20:41:45 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e33 e33: 3 total, 3 up, 3 in
Nov 23 20:41:45 compute-1 sudo[81253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf.new /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 20:41:45 compute-1 sudo[81253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:45 compute-1 sudo[81253]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[2.18( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=33 pruub=7.617355347s) [2] r=-1 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.174789429s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[2.18( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=33 pruub=7.617315769s) [2] r=-1 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.174789429s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[4.1f( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.412119865s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969696045s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[4.15( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.411488533s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969139099s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[4.15( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.411454201s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969139099s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[2.12( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=33 pruub=7.616773129s) [2] r=-1 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.174690247s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[2.12( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=33 pruub=7.616761684s) [2] r=-1 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.174690247s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[3.15( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.411209106s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969192505s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[3.15( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.411164284s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969192505s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[3.11( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.410974503s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969039917s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[3.11( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.410963058s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969039917s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[2.f( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=33 pruub=7.616515636s) [2] r=-1 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.174659729s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[3.e( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.410875320s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969032288s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[3.e( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.410862923s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969032288s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[4.9( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.410815239s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969032288s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[4.9( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.410793304s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969032288s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[4.8( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.410750389s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969017029s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[4.8( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.410736084s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969017029s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[4.1f( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.412009239s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969696045s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[5.4( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.410283089s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.968681335s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[2.b( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=33 pruub=7.616336823s) [2] r=-1 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.174743652s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[5.4( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.410268784s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.968681335s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[2.b( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=33 pruub=7.616302013s) [2] r=-1 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.174743652s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[2.5( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=33 pruub=7.615995407s) [2] r=-1 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.174667358s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[2.f( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=33 pruub=7.616497517s) [2] r=-1 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.174659729s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[2.5( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=33 pruub=7.615975380s) [2] r=-1 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.174667358s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[4.1( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.409164429s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.968399048s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[4.1( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.409146309s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.968399048s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[3.9( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.408976555s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.968322754s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[3.9( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.408958435s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.968322754s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[3.1a( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.408830643s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.968292236s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[5.e( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.408941269s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.968414307s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[3.1d( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.408670425s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.968147278s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[3.1d( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.408656120s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.968147278s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[3.1a( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.408802986s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.968292236s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[5.e( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.408891678s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.968414307s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[2.1d( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=33 pruub=7.592020512s) [2] r=-1 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.151596069s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[2.1d( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=33 pruub=7.592005253s) [2] r=-1 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.151596069s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[5.1a( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.409728050s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969337463s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[5.1a( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.409710884s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969337463s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[2.1c( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=33 pruub=7.614380836s) [2] r=-1 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.174041748s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:41:45 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[2.1c( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=33 pruub=7.614360809s) [2] r=-1 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.174041748s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:41:46 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Nov 23 20:41:46 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Nov 23 20:41:46 compute-1 ceph-mon[80135]: OSD bench result of 9936.100737 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 23 20:41:46 compute-1 ceph-mon[80135]: 5.18 scrub starts
Nov 23 20:41:46 compute-1 ceph-mon[80135]: 5.18 scrub ok
Nov 23 20:41:46 compute-1 ceph-mon[80135]: osd.2 [v2:192.168.122.102:6800/530987644,v1:192.168.122.102:6801/530987644] boot
Nov 23 20:41:46 compute-1 ceph-mon[80135]: osdmap e33: 3 total, 3 up, 3 in
Nov 23 20:41:46 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 23 20:41:46 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1515026058' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Nov 23 20:41:46 compute-1 ceph-mon[80135]: mgrmap e12: compute-0.oyehye(active, since 2m), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 20:41:46 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:46 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:46 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:46 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:46 compute-1 ceph-mon[80135]: 3.b deep-scrub starts
Nov 23 20:41:46 compute-1 ceph-mon[80135]: 3.b deep-scrub ok
Nov 23 20:41:46 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:46 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:46 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:46 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 20:41:46 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 20:41:46 compute-1 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:41:46 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1621977935' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Nov 23 20:41:46 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e34 e34: 3 total, 3 up, 3 in
Nov 23 20:41:46 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:41:47 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 4.c scrub starts
Nov 23 20:41:47 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 4.c scrub ok
Nov 23 20:41:47 compute-1 ceph-mgr[80441]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 23 20:41:47 compute-1 ceph-mgr[80441]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 23 20:41:47 compute-1 ceph-mgr[80441]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 23 20:41:47 compute-1 ceph-mgr[80441]: mgr respawn  1: '-n'
Nov 23 20:41:47 compute-1 ceph-mgr[80441]: mgr respawn  2: 'mgr.compute-1.kgyerp'
Nov 23 20:41:47 compute-1 ceph-mgr[80441]: mgr respawn  3: '-f'
Nov 23 20:41:47 compute-1 ceph-mgr[80441]: mgr respawn  4: '--setuser'
Nov 23 20:41:47 compute-1 ceph-mgr[80441]: mgr respawn  5: 'ceph'
Nov 23 20:41:47 compute-1 ceph-mgr[80441]: mgr respawn  6: '--setgroup'
Nov 23 20:41:47 compute-1 ceph-mgr[80441]: mgr respawn  7: 'ceph'
Nov 23 20:41:47 compute-1 ceph-mgr[80441]: mgr respawn  8: '--default-log-to-file=false'
Nov 23 20:41:47 compute-1 ceph-mgr[80441]: mgr respawn  9: '--default-log-to-journald=true'
Nov 23 20:41:47 compute-1 ceph-mgr[80441]: mgr respawn  10: '--default-log-to-stderr=false'
Nov 23 20:41:47 compute-1 ceph-mgr[80441]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Nov 23 20:41:47 compute-1 ceph-mgr[80441]: mgr respawn  exe_path /proc/self/exe
Nov 23 20:41:47 compute-1 sshd-session[72864]: Connection closed by 192.168.122.100 port 56520
Nov 23 20:41:47 compute-1 sshd-session[72835]: Connection closed by 192.168.122.100 port 56514
Nov 23 20:41:47 compute-1 sshd-session[72922]: Connection closed by 192.168.122.100 port 56534
Nov 23 20:41:47 compute-1 sshd-session[72893]: Connection closed by 192.168.122.100 port 56528
Nov 23 20:41:47 compute-1 sshd-session[72949]: Connection closed by 192.168.122.100 port 56546
Nov 23 20:41:47 compute-1 sshd-session[72978]: Connection closed by 192.168.122.100 port 56552
Nov 23 20:41:47 compute-1 sshd-session[72806]: Connection closed by 192.168.122.100 port 58274
Nov 23 20:41:47 compute-1 sshd-session[72719]: Connection closed by 192.168.122.100 port 58254
Nov 23 20:41:47 compute-1 sshd-session[72777]: Connection closed by 192.168.122.100 port 58270
Nov 23 20:41:47 compute-1 sshd-session[72690]: Connection closed by 192.168.122.100 port 58250
Nov 23 20:41:47 compute-1 sshd-session[72689]: Connection closed by 192.168.122.100 port 58242
Nov 23 20:41:47 compute-1 sshd-session[72748]: Connection closed by 192.168.122.100 port 58262
Nov 23 20:41:47 compute-1 systemd[1]: session-30.scope: Deactivated successfully.
Nov 23 20:41:47 compute-1 sshd-session[72919]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 23 20:41:47 compute-1 sshd-session[72890]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 23 20:41:47 compute-1 sshd-session[72684]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 23 20:41:47 compute-1 sshd-session[72716]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 23 20:41:47 compute-1 systemd[1]: session-22.scope: Deactivated successfully.
Nov 23 20:41:47 compute-1 sshd-session[72803]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 23 20:41:47 compute-1 sshd-session[72832]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 23 20:41:47 compute-1 systemd[1]: session-29.scope: Deactivated successfully.
Nov 23 20:41:47 compute-1 sshd-session[72774]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 23 20:41:47 compute-1 sshd-session[72975]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 23 20:41:47 compute-1 sshd-session[72861]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 23 20:41:47 compute-1 systemd[1]: session-28.scope: Deactivated successfully.
Nov 23 20:41:47 compute-1 systemd[1]: session-23.scope: Deactivated successfully.
Nov 23 20:41:47 compute-1 systemd-logind[793]: Session 30 logged out. Waiting for processes to exit.
Nov 23 20:41:47 compute-1 systemd[1]: session-27.scope: Deactivated successfully.
Nov 23 20:41:47 compute-1 systemd[1]: session-25.scope: Deactivated successfully.
Nov 23 20:41:47 compute-1 systemd[1]: session-26.scope: Deactivated successfully.
Nov 23 20:41:47 compute-1 systemd[1]: session-32.scope: Deactivated successfully.
Nov 23 20:41:47 compute-1 systemd[1]: session-32.scope: Consumed 59.443s CPU time.
Nov 23 20:41:47 compute-1 systemd-logind[793]: Session 29 logged out. Waiting for processes to exit.
Nov 23 20:41:47 compute-1 systemd-logind[793]: Session 22 logged out. Waiting for processes to exit.
Nov 23 20:41:47 compute-1 systemd-logind[793]: Session 28 logged out. Waiting for processes to exit.
Nov 23 20:41:47 compute-1 systemd-logind[793]: Session 25 logged out. Waiting for processes to exit.
Nov 23 20:41:47 compute-1 systemd-logind[793]: Session 27 logged out. Waiting for processes to exit.
Nov 23 20:41:47 compute-1 systemd-logind[793]: Session 23 logged out. Waiting for processes to exit.
Nov 23 20:41:47 compute-1 systemd-logind[793]: Session 26 logged out. Waiting for processes to exit.
Nov 23 20:41:47 compute-1 systemd-logind[793]: Session 32 logged out. Waiting for processes to exit.
Nov 23 20:41:47 compute-1 sshd-session[72667]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 23 20:41:47 compute-1 sshd-session[72745]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 23 20:41:47 compute-1 systemd[1]: session-20.scope: Deactivated successfully.
Nov 23 20:41:47 compute-1 sshd-session[72946]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 23 20:41:47 compute-1 systemd-logind[793]: Removed session 30.
Nov 23 20:41:47 compute-1 systemd[1]: session-31.scope: Deactivated successfully.
Nov 23 20:41:47 compute-1 systemd[1]: session-24.scope: Deactivated successfully.
Nov 23 20:41:47 compute-1 systemd-logind[793]: Session 20 logged out. Waiting for processes to exit.
Nov 23 20:41:47 compute-1 systemd-logind[793]: Session 24 logged out. Waiting for processes to exit.
Nov 23 20:41:47 compute-1 systemd-logind[793]: Session 31 logged out. Waiting for processes to exit.
Nov 23 20:41:47 compute-1 systemd-logind[793]: Removed session 22.
Nov 23 20:41:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: ignoring --setuser ceph since I am not root
Nov 23 20:41:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: ignoring --setgroup ceph since I am not root
Nov 23 20:41:47 compute-1 systemd-logind[793]: Removed session 29.
Nov 23 20:41:47 compute-1 systemd-logind[793]: Removed session 28.
Nov 23 20:41:47 compute-1 systemd-logind[793]: Removed session 23.
Nov 23 20:41:47 compute-1 systemd-logind[793]: Removed session 27.
Nov 23 20:41:47 compute-1 systemd-logind[793]: Removed session 25.
Nov 23 20:41:47 compute-1 systemd-logind[793]: Removed session 26.
Nov 23 20:41:47 compute-1 ceph-mgr[80441]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 23 20:41:47 compute-1 systemd-logind[793]: Removed session 32.
Nov 23 20:41:47 compute-1 ceph-mgr[80441]: pidfile_write: ignore empty --pid-file
Nov 23 20:41:47 compute-1 systemd-logind[793]: Removed session 20.
Nov 23 20:41:47 compute-1 systemd-logind[793]: Removed session 31.
Nov 23 20:41:47 compute-1 systemd-logind[793]: Removed session 24.
Nov 23 20:41:47 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'alerts'
Nov 23 20:41:47 compute-1 ceph-mgr[80441]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 23 20:41:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:47.503+0000 7fb2bbba8140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 23 20:41:47 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'balancer'
Nov 23 20:41:47 compute-1 ceph-mgr[80441]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 23 20:41:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:47.590+0000 7fb2bbba8140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 23 20:41:47 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'cephadm'
Nov 23 20:41:47 compute-1 ceph-mon[80135]: 5.1c scrub starts
Nov 23 20:41:47 compute-1 ceph-mon[80135]: 5.1c scrub ok
Nov 23 20:41:47 compute-1 ceph-mon[80135]: osdmap e34: 3 total, 3 up, 3 in
Nov 23 20:41:47 compute-1 ceph-mon[80135]: 5.c scrub starts
Nov 23 20:41:47 compute-1 ceph-mon[80135]: 5.c scrub ok
Nov 23 20:41:47 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1621977935' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Nov 23 20:41:47 compute-1 ceph-mon[80135]: mgrmap e13: compute-0.oyehye(active, since 2m), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 20:41:48 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 4.18 deep-scrub starts
Nov 23 20:41:48 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 4.18 deep-scrub ok
Nov 23 20:41:48 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'crash'
Nov 23 20:41:48 compute-1 ceph-mgr[80441]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 23 20:41:48 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:48.378+0000 7fb2bbba8140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 23 20:41:48 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'dashboard'
Nov 23 20:41:48 compute-1 ceph-mon[80135]: 4.c scrub starts
Nov 23 20:41:48 compute-1 ceph-mon[80135]: 4.c scrub ok
Nov 23 20:41:48 compute-1 ceph-mon[80135]: 4.1f scrub starts
Nov 23 20:41:48 compute-1 ceph-mon[80135]: 4.1f scrub ok
Nov 23 20:41:48 compute-1 ceph-mon[80135]: 5.6 deep-scrub starts
Nov 23 20:41:48 compute-1 ceph-mon[80135]: 5.6 deep-scrub ok
Nov 23 20:41:48 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'devicehealth'
Nov 23 20:41:49 compute-1 ceph-mgr[80441]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 23 20:41:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:49.037+0000 7fb2bbba8140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 23 20:41:49 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'diskprediction_local'
Nov 23 20:41:49 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.f scrub starts
Nov 23 20:41:49 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.f scrub ok
Nov 23 20:41:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 23 20:41:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 23 20:41:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]:   from numpy import show_config as show_numpy_config
Nov 23 20:41:49 compute-1 ceph-mgr[80441]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 23 20:41:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:49.208+0000 7fb2bbba8140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 23 20:41:49 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'influx'
Nov 23 20:41:49 compute-1 ceph-mgr[80441]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 23 20:41:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:49.279+0000 7fb2bbba8140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 23 20:41:49 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'insights'
Nov 23 20:41:49 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'iostat'
Nov 23 20:41:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:49.418+0000 7fb2bbba8140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 23 20:41:49 compute-1 ceph-mgr[80441]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 23 20:41:49 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'k8sevents'
Nov 23 20:41:49 compute-1 ceph-mon[80135]: 4.18 deep-scrub starts
Nov 23 20:41:49 compute-1 ceph-mon[80135]: 4.18 deep-scrub ok
Nov 23 20:41:49 compute-1 ceph-mon[80135]: 4.8 scrub starts
Nov 23 20:41:49 compute-1 ceph-mon[80135]: 4.7 scrub starts
Nov 23 20:41:49 compute-1 ceph-mon[80135]: 4.8 scrub ok
Nov 23 20:41:49 compute-1 ceph-mon[80135]: 4.7 scrub ok
Nov 23 20:41:49 compute-1 ceph-mon[80135]: 4.9 scrub starts
Nov 23 20:41:49 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'localpool'
Nov 23 20:41:49 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'mds_autoscaler'
Nov 23 20:41:50 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'mirroring'
Nov 23 20:41:50 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Nov 23 20:41:50 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Nov 23 20:41:50 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'nfs'
Nov 23 20:41:50 compute-1 ceph-mgr[80441]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 23 20:41:50 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'orchestrator'
Nov 23 20:41:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:50.437+0000 7fb2bbba8140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 23 20:41:50 compute-1 ceph-mon[80135]: 5.f scrub starts
Nov 23 20:41:50 compute-1 ceph-mon[80135]: 5.f scrub ok
Nov 23 20:41:50 compute-1 ceph-mon[80135]: 4.9 scrub ok
Nov 23 20:41:50 compute-1 ceph-mon[80135]: 4.0 scrub starts
Nov 23 20:41:50 compute-1 ceph-mon[80135]: 4.0 scrub ok
Nov 23 20:41:50 compute-1 ceph-mon[80135]: 5.4 scrub starts
Nov 23 20:41:50 compute-1 ceph-mon[80135]: 5.4 scrub ok
Nov 23 20:41:50 compute-1 ceph-mgr[80441]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 23 20:41:50 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'osd_perf_query'
Nov 23 20:41:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:50.644+0000 7fb2bbba8140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 23 20:41:50 compute-1 ceph-mgr[80441]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 23 20:41:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:50.716+0000 7fb2bbba8140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 23 20:41:50 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'osd_support'
Nov 23 20:41:50 compute-1 ceph-mgr[80441]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 23 20:41:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:50.782+0000 7fb2bbba8140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 23 20:41:50 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'pg_autoscaler'
Nov 23 20:41:50 compute-1 ceph-mgr[80441]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 23 20:41:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:50.864+0000 7fb2bbba8140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 23 20:41:50 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'progress'
Nov 23 20:41:50 compute-1 ceph-mgr[80441]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 23 20:41:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:50.940+0000 7fb2bbba8140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 23 20:41:50 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'prometheus'
Nov 23 20:41:51 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 4.e scrub starts
Nov 23 20:41:51 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 4.e scrub ok
Nov 23 20:41:51 compute-1 ceph-mgr[80441]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 23 20:41:51 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:51.307+0000 7fb2bbba8140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 23 20:41:51 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'rbd_support'
Nov 23 20:41:51 compute-1 ceph-mgr[80441]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 23 20:41:51 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:51.407+0000 7fb2bbba8140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 23 20:41:51 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'restful'
Nov 23 20:41:51 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'rgw'
Nov 23 20:41:51 compute-1 ceph-mon[80135]: 5.1 scrub starts
Nov 23 20:41:51 compute-1 ceph-mon[80135]: 5.1 scrub ok
Nov 23 20:41:51 compute-1 ceph-mon[80135]: 3.7 scrub starts
Nov 23 20:41:51 compute-1 ceph-mon[80135]: 3.7 scrub ok
Nov 23 20:41:51 compute-1 ceph-mon[80135]: 3.e scrub starts
Nov 23 20:41:51 compute-1 ceph-mon[80135]: 3.e scrub ok
Nov 23 20:41:51 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:41:51 compute-1 ceph-mgr[80441]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 23 20:41:51 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:51.831+0000 7fb2bbba8140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 23 20:41:51 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'rook'
Nov 23 20:41:52 compute-1 sshd-session[81309]: Invalid user user1 from 43.225.142.116 port 47952
Nov 23 20:41:52 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Nov 23 20:41:52 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Nov 23 20:41:52 compute-1 sshd-session[81309]: Received disconnect from 43.225.142.116 port 47952:11: Bye Bye [preauth]
Nov 23 20:41:52 compute-1 sshd-session[81309]: Disconnected from invalid user user1 43.225.142.116 port 47952 [preauth]
Nov 23 20:41:52 compute-1 ceph-mgr[80441]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 23 20:41:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:52.405+0000 7fb2bbba8140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 23 20:41:52 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'selftest'
Nov 23 20:41:52 compute-1 ceph-mgr[80441]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 23 20:41:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:52.477+0000 7fb2bbba8140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 23 20:41:52 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'snap_schedule'
Nov 23 20:41:52 compute-1 ceph-mgr[80441]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 23 20:41:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:52.558+0000 7fb2bbba8140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 23 20:41:52 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'stats'
Nov 23 20:41:52 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'status'
Nov 23 20:41:52 compute-1 ceph-mon[80135]: 4.e scrub starts
Nov 23 20:41:52 compute-1 ceph-mon[80135]: 4.e scrub ok
Nov 23 20:41:52 compute-1 ceph-mon[80135]: 3.6 scrub starts
Nov 23 20:41:52 compute-1 ceph-mon[80135]: 3.6 scrub ok
Nov 23 20:41:52 compute-1 ceph-mgr[80441]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 23 20:41:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:52.719+0000 7fb2bbba8140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 23 20:41:52 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'telegraf'
Nov 23 20:41:52 compute-1 ceph-mgr[80441]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 23 20:41:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:52.789+0000 7fb2bbba8140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 23 20:41:52 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'telemetry'
Nov 23 20:41:52 compute-1 ceph-mgr[80441]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 23 20:41:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:52.945+0000 7fb2bbba8140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 23 20:41:52 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'test_orchestrator'
Nov 23 20:41:53 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Nov 23 20:41:53 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Nov 23 20:41:53 compute-1 ceph-mgr[80441]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 23 20:41:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:53.161+0000 7fb2bbba8140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 23 20:41:53 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'volumes'
Nov 23 20:41:53 compute-1 ceph-mgr[80441]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 23 20:41:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:53.427+0000 7fb2bbba8140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 23 20:41:53 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'zabbix'
Nov 23 20:41:53 compute-1 ceph-mgr[80441]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 23 20:41:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:53.495+0000 7fb2bbba8140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 23 20:41:53 compute-1 ceph-mgr[80441]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 20:41:53 compute-1 ceph-mgr[80441]: mgr load Constructed class from module: dashboard
Nov 23 20:41:53 compute-1 ceph-mgr[80441]: [dashboard INFO root] server: ssl=no host=192.168.122.101 port=8443
Nov 23 20:41:53 compute-1 ceph-mgr[80441]: [dashboard INFO root] Configured CherryPy, starting engine...
Nov 23 20:41:53 compute-1 ceph-mgr[80441]: [dashboard INFO root] Starting engine...
Nov 23 20:41:53 compute-1 ceph-mgr[80441]: ms_deliver_dispatch: unhandled message 0x55bab5a41860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Nov 23 20:41:53 compute-1 ceph-mgr[80441]: [dashboard INFO root] Engine started...
Nov 23 20:41:53 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e35 e35: 3 total, 3 up, 3 in
Nov 23 20:41:53 compute-1 ceph-mon[80135]: 5.1b scrub starts
Nov 23 20:41:53 compute-1 ceph-mon[80135]: 5.1b scrub ok
Nov 23 20:41:53 compute-1 ceph-mon[80135]: 3.1a deep-scrub starts
Nov 23 20:41:53 compute-1 ceph-mon[80135]: 5.3 scrub starts
Nov 23 20:41:53 compute-1 ceph-mon[80135]: 3.1a deep-scrub ok
Nov 23 20:41:53 compute-1 ceph-mon[80135]: 5.3 scrub ok
Nov 23 20:41:53 compute-1 ceph-mon[80135]: Standby manager daemon compute-2.jtkauz restarted
Nov 23 20:41:53 compute-1 ceph-mon[80135]: Standby manager daemon compute-2.jtkauz started
Nov 23 20:41:53 compute-1 ceph-mon[80135]: Active manager daemon compute-0.oyehye restarted
Nov 23 20:41:53 compute-1 ceph-mon[80135]: Activating manager daemon compute-0.oyehye
Nov 23 20:41:53 compute-1 ceph-mon[80135]: 4.1 deep-scrub starts
Nov 23 20:41:53 compute-1 ceph-mon[80135]: 4.1 deep-scrub ok
Nov 23 20:41:54 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Nov 23 20:41:54 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Nov 23 20:41:54 compute-1 sshd-session[81323]: Accepted publickey for ceph-admin from 192.168.122.100 port 40090 ssh2: RSA SHA256:ArvGVmp8+2uP4nDr4YVQ5KKtNyaQTjQGpGKaK12sPrI
Nov 23 20:41:54 compute-1 systemd-logind[793]: New session 33 of user ceph-admin.
Nov 23 20:41:54 compute-1 systemd[1]: Started Session 33 of User ceph-admin.
Nov 23 20:41:54 compute-1 sshd-session[81323]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 23 20:41:54 compute-1 sudo[81327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:41:54 compute-1 sudo[81327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:54 compute-1 sudo[81327]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:54 compute-1 sudo[81352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 23 20:41:54 compute-1 sudo[81352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:54 compute-1 ceph-mon[80135]: 3.3 scrub starts
Nov 23 20:41:54 compute-1 ceph-mon[80135]: 3.3 scrub ok
Nov 23 20:41:54 compute-1 ceph-mon[80135]: 3.1 scrub starts
Nov 23 20:41:54 compute-1 ceph-mon[80135]: 3.1 scrub ok
Nov 23 20:41:54 compute-1 ceph-mon[80135]: osdmap e35: 3 total, 3 up, 3 in
Nov 23 20:41:54 compute-1 ceph-mon[80135]: Standby manager daemon compute-1.kgyerp restarted
Nov 23 20:41:54 compute-1 ceph-mon[80135]: Standby manager daemon compute-1.kgyerp started
Nov 23 20:41:54 compute-1 ceph-mon[80135]: mgrmap e14: compute-0.oyehye(active, starting, since 0.252391s), standbys: compute-1.kgyerp, compute-2.jtkauz
Nov 23 20:41:54 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 23 20:41:54 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 23 20:41:54 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 23 20:41:54 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mgr metadata", "who": "compute-0.oyehye", "id": "compute-0.oyehye"}]: dispatch
Nov 23 20:41:54 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mgr metadata", "who": "compute-1.kgyerp", "id": "compute-1.kgyerp"}]: dispatch
Nov 23 20:41:54 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mgr metadata", "who": "compute-2.jtkauz", "id": "compute-2.jtkauz"}]: dispatch
Nov 23 20:41:54 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 20:41:54 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 23 20:41:54 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 23 20:41:54 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 23 20:41:54 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 23 20:41:54 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 23 20:41:54 compute-1 ceph-mon[80135]: Manager daemon compute-0.oyehye is now available
Nov 23 20:41:54 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/mirror_snapshot_schedule"}]: dispatch
Nov 23 20:41:54 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/trash_purge_schedule"}]: dispatch
Nov 23 20:41:54 compute-1 ceph-mon[80135]: 4.15 scrub starts
Nov 23 20:41:54 compute-1 ceph-mon[80135]: 4.15 scrub ok
Nov 23 20:41:54 compute-1 podman[81448]: 2025-11-23 20:41:54.926674067 +0000 UTC m=+0.057897730 container exec e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 20:41:55 compute-1 podman[81448]: 2025-11-23 20:41:55.021831859 +0000 UTC m=+0.153055522 container exec_died e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 23 20:41:55 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Nov 23 20:41:55 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Nov 23 20:41:55 compute-1 sudo[81352]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:55 compute-1 sudo[81534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:41:55 compute-1 sudo[81534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:55 compute-1 sudo[81534]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:55 compute-1 sudo[81559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 20:41:55 compute-1 sudo[81559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:55 compute-1 ceph-mon[80135]: 4.1a scrub starts
Nov 23 20:41:55 compute-1 ceph-mon[80135]: 4.1a scrub ok
Nov 23 20:41:55 compute-1 ceph-mon[80135]: 3.2 scrub starts
Nov 23 20:41:55 compute-1 ceph-mon[80135]: 3.2 scrub ok
Nov 23 20:41:55 compute-1 ceph-mon[80135]: from='client.14343 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 20:41:55 compute-1 ceph-mon[80135]: mgrmap e15: compute-0.oyehye(active, since 1.2741s), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 20:41:55 compute-1 ceph-mon[80135]: pgmap v3: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:41:55 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:55 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:55 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:55 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:55 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:55 compute-1 ceph-mon[80135]: 2.18 scrub starts
Nov 23 20:41:55 compute-1 ceph-mon[80135]: 2.18 scrub ok
Nov 23 20:41:55 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:55 compute-1 sudo[81559]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:56 compute-1 sudo[81615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:41:56 compute-1 sudo[81615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:56 compute-1 sudo[81615]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:56 compute-1 sudo[81640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Nov 23 20:41:56 compute-1 sudo[81640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:56 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Nov 23 20:41:56 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Nov 23 20:41:56 compute-1 sudo[81640]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:56 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:41:56 compute-1 ceph-mon[80135]: 5.7 scrub starts
Nov 23 20:41:56 compute-1 ceph-mon[80135]: 5.7 scrub ok
Nov 23 20:41:56 compute-1 ceph-mon[80135]: 5.5 scrub starts
Nov 23 20:41:56 compute-1 ceph-mon[80135]: 5.5 scrub ok
Nov 23 20:41:56 compute-1 ceph-mon[80135]: [23/Nov/2025:20:41:55] ENGINE Bus STARTING
Nov 23 20:41:56 compute-1 ceph-mon[80135]: pgmap v4: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:41:56 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:56 compute-1 ceph-mon[80135]: [23/Nov/2025:20:41:55] ENGINE Serving on https://192.168.122.100:7150
Nov 23 20:41:56 compute-1 ceph-mon[80135]: [23/Nov/2025:20:41:55] ENGINE Client ('192.168.122.100', 34418) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 23 20:41:56 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:56 compute-1 ceph-mon[80135]: [23/Nov/2025:20:41:55] ENGINE Serving on http://192.168.122.100:8765
Nov 23 20:41:56 compute-1 ceph-mon[80135]: [23/Nov/2025:20:41:55] ENGINE Bus STARTED
Nov 23 20:41:56 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:56 compute-1 ceph-mon[80135]: mgrmap e16: compute-0.oyehye(active, since 2s), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 20:41:56 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:56 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 23 20:41:56 compute-1 ceph-mon[80135]: Adjusting osd_memory_target on compute-0 to 127.9M
Nov 23 20:41:56 compute-1 ceph-mon[80135]: Unable to set osd_memory_target on compute-0 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Nov 23 20:41:56 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:56 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:56 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 23 20:41:56 compute-1 ceph-mon[80135]: 5.e scrub starts
Nov 23 20:41:56 compute-1 ceph-mon[80135]: 5.e scrub ok
Nov 23 20:41:56 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:57 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Nov 23 20:41:57 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Nov 23 20:41:57 compute-1 sudo[81684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 23 20:41:57 compute-1 sudo[81684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:57 compute-1 sudo[81684]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:57 compute-1 sudo[81709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph
Nov 23 20:41:57 compute-1 sudo[81709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:57 compute-1 sudo[81709]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:57 compute-1 sudo[81734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.conf.new
Nov 23 20:41:57 compute-1 sudo[81734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:57 compute-1 sudo[81734]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:57 compute-1 sudo[81759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:41:57 compute-1 sudo[81759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:57 compute-1 sudo[81759]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:57 compute-1 sudo[81784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.conf.new
Nov 23 20:41:57 compute-1 sudo[81784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:57 compute-1 sudo[81784]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:57 compute-1 sudo[81832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.conf.new
Nov 23 20:41:57 compute-1 sudo[81832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:57 compute-1 sudo[81832]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:57 compute-1 sudo[81857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.conf.new
Nov 23 20:41:57 compute-1 sudo[81857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:57 compute-1 sudo[81857]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:57 compute-1 sudo[81882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 23 20:41:57 compute-1 sudo[81882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:57 compute-1 sudo[81882]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:57 compute-1 ceph-mon[80135]: 5.2 scrub starts
Nov 23 20:41:57 compute-1 ceph-mon[80135]: 5.2 scrub ok
Nov 23 20:41:57 compute-1 ceph-mon[80135]: Adjusting osd_memory_target on compute-1 to 128.0M
Nov 23 20:41:57 compute-1 ceph-mon[80135]: Unable to set osd_memory_target on compute-1 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Nov 23 20:41:57 compute-1 ceph-mon[80135]: 4.4 deep-scrub starts
Nov 23 20:41:57 compute-1 ceph-mon[80135]: 4.4 deep-scrub ok
Nov 23 20:41:57 compute-1 ceph-mon[80135]: from='client.14376 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 20:41:57 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:57 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:57 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 23 20:41:57 compute-1 ceph-mon[80135]: Adjusting osd_memory_target on compute-2 to 128.0M
Nov 23 20:41:57 compute-1 ceph-mon[80135]: Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Nov 23 20:41:57 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:41:57 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 20:41:57 compute-1 ceph-mon[80135]: Updating compute-0:/etc/ceph/ceph.conf
Nov 23 20:41:57 compute-1 ceph-mon[80135]: Updating compute-1:/etc/ceph/ceph.conf
Nov 23 20:41:57 compute-1 ceph-mon[80135]: Updating compute-2:/etc/ceph/ceph.conf
Nov 23 20:41:57 compute-1 ceph-mon[80135]: 3.9 scrub starts
Nov 23 20:41:57 compute-1 ceph-mon[80135]: 3.9 scrub ok
Nov 23 20:41:57 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:57 compute-1 sudo[81907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config
Nov 23 20:41:57 compute-1 sudo[81907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:57 compute-1 sudo[81907]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:57 compute-1 sudo[81932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config
Nov 23 20:41:57 compute-1 sudo[81932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:57 compute-1 sudo[81932]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:58 compute-1 sudo[81957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf.new
Nov 23 20:41:58 compute-1 sudo[81957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:58 compute-1 sudo[81957]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:58 compute-1 sudo[81982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:41:58 compute-1 sudo[81982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:58 compute-1 sudo[81982]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:58 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Nov 23 20:41:58 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Nov 23 20:41:58 compute-1 sudo[82007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf.new
Nov 23 20:41:58 compute-1 sudo[82007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:58 compute-1 sudo[82007]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:58 compute-1 sudo[82055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf.new
Nov 23 20:41:58 compute-1 sudo[82055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:58 compute-1 sudo[82055]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:58 compute-1 sudo[82080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf.new
Nov 23 20:41:58 compute-1 sudo[82080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:58 compute-1 sudo[82080]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:58 compute-1 sudo[82105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf.new /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 20:41:58 compute-1 sudo[82105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:58 compute-1 sudo[82105]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:58 compute-1 sudo[82130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 23 20:41:58 compute-1 sudo[82130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:58 compute-1 sudo[82130]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:58 compute-1 sudo[82155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph
Nov 23 20:41:58 compute-1 sudo[82155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:58 compute-1 sudo[82155]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:58 compute-1 sudo[82180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.client.admin.keyring.new
Nov 23 20:41:58 compute-1 sudo[82180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:58 compute-1 sudo[82180]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:58 compute-1 sudo[82205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:41:58 compute-1 sudo[82205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:58 compute-1 sudo[82205]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:58 compute-1 sudo[82230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.client.admin.keyring.new
Nov 23 20:41:58 compute-1 sudo[82230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:58 compute-1 sudo[82230]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:58 compute-1 sudo[82278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.client.admin.keyring.new
Nov 23 20:41:58 compute-1 sudo[82278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:58 compute-1 sudo[82278]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:58 compute-1 sudo[82303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.client.admin.keyring.new
Nov 23 20:41:58 compute-1 sudo[82303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:58 compute-1 sudo[82303]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:58 compute-1 sudo[82328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Nov 23 20:41:58 compute-1 sudo[82328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:58 compute-1 ceph-mon[80135]: 3.5 scrub starts
Nov 23 20:41:58 compute-1 ceph-mon[80135]: 3.5 scrub ok
Nov 23 20:41:58 compute-1 ceph-mon[80135]: 3.4 scrub starts
Nov 23 20:41:58 compute-1 ceph-mon[80135]: 3.4 scrub ok
Nov 23 20:41:58 compute-1 ceph-mon[80135]: from='client.24161 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 20:41:58 compute-1 ceph-mon[80135]: pgmap v5: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:41:58 compute-1 ceph-mon[80135]: Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 20:41:58 compute-1 ceph-mon[80135]: Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 20:41:58 compute-1 ceph-mon[80135]: Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 20:41:58 compute-1 ceph-mon[80135]: mgrmap e17: compute-0.oyehye(active, since 4s), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 20:41:58 compute-1 ceph-mon[80135]: 3.1d scrub starts
Nov 23 20:41:58 compute-1 ceph-mon[80135]: 3.1d scrub ok
Nov 23 20:41:58 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 20:41:58 compute-1 sudo[82328]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:58 compute-1 sudo[82353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config
Nov 23 20:41:58 compute-1 sudo[82353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:58 compute-1 sudo[82353]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:59 compute-1 sudo[82378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config
Nov 23 20:41:59 compute-1 sudo[82378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:59 compute-1 sudo[82378]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:59 compute-1 sudo[82403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring.new
Nov 23 20:41:59 compute-1 sudo[82403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:59 compute-1 sudo[82403]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:59 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.d deep-scrub starts
Nov 23 20:41:59 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.d deep-scrub ok
Nov 23 20:41:59 compute-1 sudo[82428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:41:59 compute-1 sudo[82428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:59 compute-1 sudo[82428]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:59 compute-1 sudo[82453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring.new
Nov 23 20:41:59 compute-1 sudo[82453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:59 compute-1 sudo[82453]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:59 compute-1 sudo[82501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring.new
Nov 23 20:41:59 compute-1 sudo[82501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:59 compute-1 sudo[82501]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:59 compute-1 sudo[82526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring.new
Nov 23 20:41:59 compute-1 sudo[82526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:59 compute-1 sudo[82526]: pam_unix(sudo:session): session closed for user root
Nov 23 20:41:59 compute-1 sudo[82551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring.new /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 20:41:59 compute-1 sudo[82551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:41:59 compute-1 sudo[82551]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:00 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 4.a scrub starts
Nov 23 20:42:00 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 4.a scrub ok
Nov 23 20:42:00 compute-1 ceph-mgr[80441]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 23 20:42:00 compute-1 ceph-mgr[80441]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 23 20:42:00 compute-1 ceph-mgr[80441]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 23 20:42:00 compute-1 ceph-mgr[80441]: mgr respawn  1: '-n'
Nov 23 20:42:00 compute-1 ceph-mgr[80441]: mgr respawn  2: 'mgr.compute-1.kgyerp'
Nov 23 20:42:00 compute-1 ceph-mgr[80441]: mgr respawn  3: '-f'
Nov 23 20:42:00 compute-1 ceph-mgr[80441]: mgr respawn  4: '--setuser'
Nov 23 20:42:00 compute-1 ceph-mgr[80441]: mgr respawn  5: 'ceph'
Nov 23 20:42:00 compute-1 ceph-mgr[80441]: mgr respawn  6: '--setgroup'
Nov 23 20:42:00 compute-1 ceph-mgr[80441]: mgr respawn  7: 'ceph'
Nov 23 20:42:00 compute-1 ceph-mgr[80441]: mgr respawn  8: '--default-log-to-file=false'
Nov 23 20:42:00 compute-1 ceph-mgr[80441]: mgr respawn  9: '--default-log-to-journald=true'
Nov 23 20:42:00 compute-1 ceph-mgr[80441]: mgr respawn  10: '--default-log-to-stderr=false'
Nov 23 20:42:00 compute-1 ceph-mgr[80441]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Nov 23 20:42:00 compute-1 ceph-mgr[80441]: mgr respawn  exe_path /proc/self/exe
Nov 23 20:42:00 compute-1 sshd-session[81326]: Connection closed by 192.168.122.100 port 40090
Nov 23 20:42:00 compute-1 sshd-session[81323]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 23 20:42:00 compute-1 systemd[1]: session-33.scope: Deactivated successfully.
Nov 23 20:42:00 compute-1 systemd[1]: session-33.scope: Consumed 4.381s CPU time.
Nov 23 20:42:00 compute-1 systemd-logind[793]: Session 33 logged out. Waiting for processes to exit.
Nov 23 20:42:00 compute-1 systemd-logind[793]: Removed session 33.
Nov 23 20:42:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: ignoring --setuser ceph since I am not root
Nov 23 20:42:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: ignoring --setgroup ceph since I am not root
Nov 23 20:42:00 compute-1 ceph-mon[80135]: 4.5 scrub starts
Nov 23 20:42:00 compute-1 ceph-mon[80135]: 4.5 scrub ok
Nov 23 20:42:00 compute-1 ceph-mon[80135]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 23 20:42:00 compute-1 ceph-mon[80135]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 23 20:42:00 compute-1 ceph-mon[80135]: 4.f scrub starts
Nov 23 20:42:00 compute-1 ceph-mon[80135]: 4.f scrub ok
Nov 23 20:42:00 compute-1 ceph-mon[80135]: from='client.14388 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 20:42:00 compute-1 ceph-mon[80135]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 23 20:42:00 compute-1 ceph-mon[80135]: Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 20:42:00 compute-1 ceph-mon[80135]: Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 20:42:00 compute-1 ceph-mon[80135]: 5.1a scrub starts
Nov 23 20:42:00 compute-1 ceph-mon[80135]: 5.1a scrub ok
Nov 23 20:42:00 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/319512723' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Nov 23 20:42:00 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:00 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:00 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:00 compute-1 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:00 compute-1 ceph-mgr[80441]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 23 20:42:00 compute-1 ceph-mgr[80441]: pidfile_write: ignore empty --pid-file
Nov 23 20:42:00 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'alerts'
Nov 23 20:42:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:00.361+0000 7f8ca9229140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 23 20:42:00 compute-1 ceph-mgr[80441]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 23 20:42:00 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'balancer'
Nov 23 20:42:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:00.460+0000 7f8ca9229140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 23 20:42:00 compute-1 ceph-mgr[80441]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 23 20:42:00 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'cephadm'
Nov 23 20:42:01 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 4.d scrub starts
Nov 23 20:42:01 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 4.d scrub ok
Nov 23 20:42:01 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'crash'
Nov 23 20:42:01 compute-1 ceph-mon[80135]: 3.d deep-scrub starts
Nov 23 20:42:01 compute-1 ceph-mon[80135]: 3.d deep-scrub ok
Nov 23 20:42:01 compute-1 ceph-mon[80135]: 5.1d scrub starts
Nov 23 20:42:01 compute-1 ceph-mon[80135]: 5.1d scrub ok
Nov 23 20:42:01 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/319512723' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Nov 23 20:42:01 compute-1 ceph-mon[80135]: 4.a scrub starts
Nov 23 20:42:01 compute-1 ceph-mon[80135]: mgrmap e18: compute-0.oyehye(active, since 6s), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 20:42:01 compute-1 ceph-mon[80135]: 4.a scrub ok
Nov 23 20:42:01 compute-1 ceph-mon[80135]: 5.19 scrub starts
Nov 23 20:42:01 compute-1 ceph-mon[80135]: 2.15 scrub starts
Nov 23 20:42:01 compute-1 ceph-mon[80135]: 5.19 scrub ok
Nov 23 20:42:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:01.277+0000 7f8ca9229140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 23 20:42:01 compute-1 ceph-mgr[80441]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 23 20:42:01 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'dashboard'
Nov 23 20:42:01 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:42:01 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'devicehealth'
Nov 23 20:42:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:01.899+0000 7f8ca9229140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 23 20:42:01 compute-1 ceph-mgr[80441]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 23 20:42:01 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'diskprediction_local'
Nov 23 20:42:02 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.a scrub starts
Nov 23 20:42:02 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.a scrub ok
Nov 23 20:42:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 23 20:42:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 23 20:42:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]:   from numpy import show_config as show_numpy_config
Nov 23 20:42:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:02.073+0000 7f8ca9229140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 23 20:42:02 compute-1 ceph-mgr[80441]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 23 20:42:02 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'influx'
Nov 23 20:42:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:02.143+0000 7f8ca9229140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 23 20:42:02 compute-1 ceph-mgr[80441]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 23 20:42:02 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'insights'
Nov 23 20:42:02 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'iostat'
Nov 23 20:42:02 compute-1 ceph-mon[80135]: 2.15 scrub ok
Nov 23 20:42:02 compute-1 ceph-mon[80135]: 4.d scrub starts
Nov 23 20:42:02 compute-1 ceph-mon[80135]: 4.d scrub ok
Nov 23 20:42:02 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2985907711' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Nov 23 20:42:02 compute-1 ceph-mon[80135]: 2.13 scrub starts
Nov 23 20:42:02 compute-1 ceph-mon[80135]: 2.13 scrub ok
Nov 23 20:42:02 compute-1 ceph-mon[80135]: 3.1f scrub starts
Nov 23 20:42:02 compute-1 ceph-mon[80135]: 3.1f scrub ok
Nov 23 20:42:02 compute-1 ceph-mon[80135]: 3.a scrub starts
Nov 23 20:42:02 compute-1 ceph-mon[80135]: 3.a scrub ok
Nov 23 20:42:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:02.289+0000 7f8ca9229140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 23 20:42:02 compute-1 ceph-mgr[80441]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 23 20:42:02 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'k8sevents'
Nov 23 20:42:02 compute-1 sshd-session[82607]: Invalid user user2 from 102.176.81.29 port 42870
Nov 23 20:42:02 compute-1 sshd-session[82607]: Received disconnect from 102.176.81.29 port 42870:11: Bye Bye [preauth]
Nov 23 20:42:02 compute-1 sshd-session[82607]: Disconnected from invalid user user2 102.176.81.29 port 42870 [preauth]
Nov 23 20:42:02 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'localpool'
Nov 23 20:42:02 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'mds_autoscaler'
Nov 23 20:42:02 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'mirroring'
Nov 23 20:42:03 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.c scrub starts
Nov 23 20:42:03 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.c scrub ok
Nov 23 20:42:03 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'nfs'
Nov 23 20:42:03 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2985907711' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Nov 23 20:42:03 compute-1 ceph-mon[80135]: mgrmap e19: compute-0.oyehye(active, since 8s), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 20:42:03 compute-1 ceph-mon[80135]: 3.1e deep-scrub starts
Nov 23 20:42:03 compute-1 ceph-mon[80135]: 3.1e deep-scrub ok
Nov 23 20:42:03 compute-1 ceph-mon[80135]: 3.c scrub starts
Nov 23 20:42:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:03.317+0000 7f8ca9229140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 23 20:42:03 compute-1 ceph-mgr[80441]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 23 20:42:03 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'orchestrator'
Nov 23 20:42:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:03.542+0000 7f8ca9229140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 23 20:42:03 compute-1 ceph-mgr[80441]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 23 20:42:03 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'osd_perf_query'
Nov 23 20:42:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:03.620+0000 7f8ca9229140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 23 20:42:03 compute-1 ceph-mgr[80441]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 23 20:42:03 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'osd_support'
Nov 23 20:42:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:03.690+0000 7f8ca9229140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 23 20:42:03 compute-1 ceph-mgr[80441]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 23 20:42:03 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'pg_autoscaler'
Nov 23 20:42:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:03.770+0000 7f8ca9229140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 23 20:42:03 compute-1 ceph-mgr[80441]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 23 20:42:03 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'progress'
Nov 23 20:42:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:03.842+0000 7f8ca9229140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 23 20:42:03 compute-1 ceph-mgr[80441]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 23 20:42:03 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'prometheus'
Nov 23 20:42:04 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Nov 23 20:42:04 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Nov 23 20:42:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:04.186+0000 7f8ca9229140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 23 20:42:04 compute-1 ceph-mgr[80441]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 23 20:42:04 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'rbd_support'
Nov 23 20:42:04 compute-1 ceph-mon[80135]: 2.c scrub starts
Nov 23 20:42:04 compute-1 ceph-mon[80135]: 2.c scrub ok
Nov 23 20:42:04 compute-1 ceph-mon[80135]: 3.c scrub ok
Nov 23 20:42:04 compute-1 ceph-mon[80135]: 2.19 scrub starts
Nov 23 20:42:04 compute-1 ceph-mon[80135]: 2.19 scrub ok
Nov 23 20:42:04 compute-1 ceph-mon[80135]: 5.9 scrub starts
Nov 23 20:42:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:04.286+0000 7f8ca9229140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 23 20:42:04 compute-1 ceph-mgr[80441]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 23 20:42:04 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'restful'
Nov 23 20:42:04 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'rgw'
Nov 23 20:42:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:04.726+0000 7f8ca9229140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 23 20:42:04 compute-1 ceph-mgr[80441]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 23 20:42:04 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'rook'
Nov 23 20:42:04 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Nov 23 20:42:04 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Nov 23 20:42:05 compute-1 ceph-mon[80135]: 5.b scrub starts
Nov 23 20:42:05 compute-1 ceph-mon[80135]: 5.b scrub ok
Nov 23 20:42:05 compute-1 ceph-mon[80135]: 5.9 scrub ok
Nov 23 20:42:05 compute-1 ceph-mon[80135]: 2.e scrub starts
Nov 23 20:42:05 compute-1 ceph-mon[80135]: 2.e scrub ok
Nov 23 20:42:05 compute-1 ceph-mon[80135]: 5.16 scrub starts
Nov 23 20:42:05 compute-1 ceph-mon[80135]: 5.16 scrub ok
Nov 23 20:42:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:05.286+0000 7f8ca9229140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 23 20:42:05 compute-1 ceph-mgr[80441]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 23 20:42:05 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'selftest'
Nov 23 20:42:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:05.361+0000 7f8ca9229140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 23 20:42:05 compute-1 ceph-mgr[80441]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 23 20:42:05 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'snap_schedule'
Nov 23 20:42:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:05.447+0000 7f8ca9229140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 23 20:42:05 compute-1 ceph-mgr[80441]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 23 20:42:05 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'stats'
Nov 23 20:42:05 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'status'
Nov 23 20:42:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:05.601+0000 7f8ca9229140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 23 20:42:05 compute-1 ceph-mgr[80441]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 23 20:42:05 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'telegraf'
Nov 23 20:42:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:05.674+0000 7f8ca9229140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 23 20:42:05 compute-1 ceph-mgr[80441]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 23 20:42:05 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'telemetry'
Nov 23 20:42:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:05.845+0000 7f8ca9229140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 23 20:42:05 compute-1 ceph-mgr[80441]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 23 20:42:05 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'test_orchestrator'
Nov 23 20:42:05 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Nov 23 20:42:05 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Nov 23 20:42:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:06.111+0000 7f8ca9229140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 23 20:42:06 compute-1 ceph-mgr[80441]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 23 20:42:06 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'volumes'
Nov 23 20:42:06 compute-1 ceph-mon[80135]: 5.d scrub starts
Nov 23 20:42:06 compute-1 ceph-mon[80135]: 5.d scrub ok
Nov 23 20:42:06 compute-1 ceph-mon[80135]: 2.1 scrub starts
Nov 23 20:42:06 compute-1 ceph-mon[80135]: 2.1 scrub ok
Nov 23 20:42:06 compute-1 ceph-mon[80135]: 3.10 scrub starts
Nov 23 20:42:06 compute-1 ceph-mon[80135]: Standby manager daemon compute-2.jtkauz restarted
Nov 23 20:42:06 compute-1 ceph-mon[80135]: Standby manager daemon compute-2.jtkauz started
Nov 23 20:42:06 compute-1 ceph-mon[80135]: 3.10 scrub ok
Nov 23 20:42:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:06.397+0000 7f8ca9229140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 23 20:42:06 compute-1 ceph-mgr[80441]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 23 20:42:06 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'zabbix'
Nov 23 20:42:06 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e36 e36: 3 total, 3 up, 3 in
Nov 23 20:42:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:06.469+0000 7f8ca9229140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 23 20:42:06 compute-1 ceph-mgr[80441]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 23 20:42:06 compute-1 ceph-mgr[80441]: ms_deliver_dispatch: unhandled message 0x556b54a8d860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Nov 23 20:42:06 compute-1 ceph-mgr[80441]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 23 20:42:06 compute-1 ceph-mgr[80441]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 23 20:42:06 compute-1 ceph-mgr[80441]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 23 20:42:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: ignoring --setuser ceph since I am not root
Nov 23 20:42:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: ignoring --setgroup ceph since I am not root
Nov 23 20:42:06 compute-1 ceph-mgr[80441]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 23 20:42:06 compute-1 ceph-mgr[80441]: pidfile_write: ignore empty --pid-file
Nov 23 20:42:06 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'alerts'
Nov 23 20:42:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:06.695+0000 7f1f16187140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 23 20:42:06 compute-1 ceph-mgr[80441]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 23 20:42:06 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'balancer'
Nov 23 20:42:06 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:42:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:06.774+0000 7f1f16187140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 23 20:42:06 compute-1 ceph-mgr[80441]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 23 20:42:06 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'cephadm'
Nov 23 20:42:06 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Nov 23 20:42:07 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Nov 23 20:42:07 compute-1 ceph-mon[80135]: 2.d scrub starts
Nov 23 20:42:07 compute-1 ceph-mon[80135]: 2.d scrub ok
Nov 23 20:42:07 compute-1 ceph-mon[80135]: mgrmap e20: compute-0.oyehye(active, since 12s), standbys: compute-1.kgyerp, compute-2.jtkauz
Nov 23 20:42:07 compute-1 ceph-mon[80135]: Active manager daemon compute-0.oyehye restarted
Nov 23 20:42:07 compute-1 ceph-mon[80135]: Activating manager daemon compute-0.oyehye
Nov 23 20:42:07 compute-1 ceph-mon[80135]: osdmap e36: 3 total, 3 up, 3 in
Nov 23 20:42:07 compute-1 ceph-mon[80135]: mgrmap e21: compute-0.oyehye(active, starting, since 0.0342255s), standbys: compute-1.kgyerp, compute-2.jtkauz
Nov 23 20:42:07 compute-1 ceph-mon[80135]: Standby manager daemon compute-1.kgyerp restarted
Nov 23 20:42:07 compute-1 ceph-mon[80135]: Standby manager daemon compute-1.kgyerp started
Nov 23 20:42:07 compute-1 ceph-mon[80135]: 2.1f scrub starts
Nov 23 20:42:07 compute-1 ceph-mon[80135]: 2.1f scrub ok
Nov 23 20:42:07 compute-1 ceph-mon[80135]: 3.13 scrub starts
Nov 23 20:42:07 compute-1 ceph-mon[80135]: 3.13 scrub ok
Nov 23 20:42:07 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'crash'
Nov 23 20:42:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:07.587+0000 7f1f16187140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 23 20:42:07 compute-1 ceph-mgr[80441]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 23 20:42:07 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'dashboard'
Nov 23 20:42:07 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Nov 23 20:42:07 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Nov 23 20:42:08 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'devicehealth'
Nov 23 20:42:08 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:08.199+0000 7f1f16187140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 23 20:42:08 compute-1 ceph-mgr[80441]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 23 20:42:08 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'diskprediction_local'
Nov 23 20:42:08 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 23 20:42:08 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 23 20:42:08 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]:   from numpy import show_config as show_numpy_config
Nov 23 20:42:08 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:08.357+0000 7f1f16187140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 23 20:42:08 compute-1 ceph-mgr[80441]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 23 20:42:08 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'influx'
Nov 23 20:42:08 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:08.428+0000 7f1f16187140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 23 20:42:08 compute-1 ceph-mgr[80441]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 23 20:42:08 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'insights'
Nov 23 20:42:08 compute-1 ceph-mon[80135]: 2.a scrub starts
Nov 23 20:42:08 compute-1 ceph-mon[80135]: 2.a scrub ok
Nov 23 20:42:08 compute-1 ceph-mon[80135]: mgrmap e22: compute-0.oyehye(active, starting, since 1.04996s), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 20:42:08 compute-1 ceph-mon[80135]: 5.11 scrub starts
Nov 23 20:42:08 compute-1 ceph-mon[80135]: 5.11 scrub ok
Nov 23 20:42:08 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'iostat'
Nov 23 20:42:08 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:08.562+0000 7f1f16187140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 23 20:42:08 compute-1 ceph-mgr[80441]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 23 20:42:08 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'k8sevents'
Nov 23 20:42:08 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'localpool'
Nov 23 20:42:08 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Nov 23 20:42:08 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Nov 23 20:42:09 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'mds_autoscaler'
Nov 23 20:42:09 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'mirroring'
Nov 23 20:42:09 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'nfs'
Nov 23 20:42:09 compute-1 ceph-mon[80135]: 5.0 scrub starts
Nov 23 20:42:09 compute-1 ceph-mon[80135]: 5.0 scrub ok
Nov 23 20:42:09 compute-1 ceph-mon[80135]: 4.6 scrub starts
Nov 23 20:42:09 compute-1 ceph-mon[80135]: 4.13 scrub starts
Nov 23 20:42:09 compute-1 ceph-mon[80135]: 4.13 scrub ok
Nov 23 20:42:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:09.554+0000 7f1f16187140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 23 20:42:09 compute-1 ceph-mgr[80441]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 23 20:42:09 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'orchestrator'
Nov 23 20:42:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:09.768+0000 7f1f16187140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 23 20:42:09 compute-1 ceph-mgr[80441]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 23 20:42:09 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'osd_perf_query'
Nov 23 20:42:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:09.842+0000 7f1f16187140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 23 20:42:09 compute-1 ceph-mgr[80441]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 23 20:42:09 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'osd_support'
Nov 23 20:42:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:09.910+0000 7f1f16187140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 23 20:42:09 compute-1 ceph-mgr[80441]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 23 20:42:09 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'pg_autoscaler'
Nov 23 20:42:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:09.995+0000 7f1f16187140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 23 20:42:09 compute-1 ceph-mgr[80441]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 23 20:42:09 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'progress'
Nov 23 20:42:10 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Nov 23 20:42:10 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Nov 23 20:42:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:10.065+0000 7f1f16187140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 23 20:42:10 compute-1 ceph-mgr[80441]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 23 20:42:10 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'prometheus'
Nov 23 20:42:10 compute-1 systemd[1]: Stopping User Manager for UID 42477...
Nov 23 20:42:10 compute-1 systemd[72671]: Activating special unit Exit the Session...
Nov 23 20:42:10 compute-1 systemd[72671]: Stopped target Main User Target.
Nov 23 20:42:10 compute-1 systemd[72671]: Stopped target Basic System.
Nov 23 20:42:10 compute-1 systemd[72671]: Stopped target Paths.
Nov 23 20:42:10 compute-1 systemd[72671]: Stopped target Sockets.
Nov 23 20:42:10 compute-1 systemd[72671]: Stopped target Timers.
Nov 23 20:42:10 compute-1 systemd[72671]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 23 20:42:10 compute-1 systemd[72671]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 23 20:42:10 compute-1 systemd[72671]: Closed D-Bus User Message Bus Socket.
Nov 23 20:42:10 compute-1 systemd[72671]: Stopped Create User's Volatile Files and Directories.
Nov 23 20:42:10 compute-1 systemd[72671]: Removed slice User Application Slice.
Nov 23 20:42:10 compute-1 systemd[72671]: Reached target Shutdown.
Nov 23 20:42:10 compute-1 systemd[72671]: Finished Exit the Session.
Nov 23 20:42:10 compute-1 systemd[72671]: Reached target Exit the Session.
Nov 23 20:42:10 compute-1 systemd[1]: user@42477.service: Deactivated successfully.
Nov 23 20:42:10 compute-1 systemd[1]: Stopped User Manager for UID 42477.
Nov 23 20:42:10 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Nov 23 20:42:10 compute-1 systemd[1]: run-user-42477.mount: Deactivated successfully.
Nov 23 20:42:10 compute-1 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Nov 23 20:42:10 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Nov 23 20:42:10 compute-1 systemd[1]: Removed slice User Slice of UID 42477.
Nov 23 20:42:10 compute-1 systemd[1]: user-42477.slice: Consumed 1min 5.143s CPU time.
Nov 23 20:42:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:10.428+0000 7f1f16187140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 23 20:42:10 compute-1 ceph-mgr[80441]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 23 20:42:10 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'rbd_support'
Nov 23 20:42:10 compute-1 ceph-mon[80135]: 4.6 scrub ok
Nov 23 20:42:10 compute-1 ceph-mon[80135]: 4.3 scrub starts
Nov 23 20:42:10 compute-1 ceph-mon[80135]: 4.3 scrub ok
Nov 23 20:42:10 compute-1 ceph-mon[80135]: 3.16 scrub starts
Nov 23 20:42:10 compute-1 ceph-mon[80135]: 3.16 scrub ok
Nov 23 20:42:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:10.528+0000 7f1f16187140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 23 20:42:10 compute-1 ceph-mgr[80441]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 23 20:42:10 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'restful'
Nov 23 20:42:10 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'rgw'
Nov 23 20:42:10 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Nov 23 20:42:10 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Nov 23 20:42:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:10.990+0000 7f1f16187140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 23 20:42:10 compute-1 ceph-mgr[80441]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 23 20:42:10 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'rook'
Nov 23 20:42:11 compute-1 ceph-mon[80135]: 3.0 scrub starts
Nov 23 20:42:11 compute-1 ceph-mon[80135]: 3.0 scrub ok
Nov 23 20:42:11 compute-1 ceph-mon[80135]: 5.15 scrub starts
Nov 23 20:42:11 compute-1 ceph-mon[80135]: 5.15 scrub ok
Nov 23 20:42:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:11.546+0000 7f1f16187140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 23 20:42:11 compute-1 ceph-mgr[80441]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 23 20:42:11 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'selftest'
Nov 23 20:42:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:11.615+0000 7f1f16187140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 23 20:42:11 compute-1 ceph-mgr[80441]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 23 20:42:11 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'snap_schedule'
Nov 23 20:42:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:11.692+0000 7f1f16187140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 23 20:42:11 compute-1 ceph-mgr[80441]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 23 20:42:11 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'stats'
Nov 23 20:42:11 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:42:11 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'status'
Nov 23 20:42:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:11.857+0000 7f1f16187140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 23 20:42:11 compute-1 ceph-mgr[80441]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 23 20:42:11 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'telegraf'
Nov 23 20:42:11 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Nov 23 20:42:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:11.931+0000 7f1f16187140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 23 20:42:11 compute-1 ceph-mgr[80441]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 23 20:42:11 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'telemetry'
Nov 23 20:42:11 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Nov 23 20:42:12 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:12.092+0000 7f1f16187140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 23 20:42:12 compute-1 ceph-mgr[80441]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 23 20:42:12 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'test_orchestrator'
Nov 23 20:42:12 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:12.310+0000 7f1f16187140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 23 20:42:12 compute-1 ceph-mgr[80441]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 23 20:42:12 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'volumes'
Nov 23 20:42:12 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:12.581+0000 7f1f16187140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 23 20:42:12 compute-1 ceph-mgr[80441]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 23 20:42:12 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'zabbix'
Nov 23 20:42:12 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:12.651+0000 7f1f16187140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 23 20:42:12 compute-1 ceph-mgr[80441]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 23 20:42:12 compute-1 ceph-mgr[80441]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 20:42:12 compute-1 ceph-mgr[80441]: mgr load Constructed class from module: dashboard
Nov 23 20:42:12 compute-1 ceph-mgr[80441]: ms_deliver_dispatch: unhandled message 0x55fea3e11860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Nov 23 20:42:12 compute-1 ceph-mgr[80441]: [dashboard INFO root] server: ssl=no host=192.168.122.101 port=8443
Nov 23 20:42:12 compute-1 ceph-mgr[80441]: [dashboard INFO root] Configured CherryPy, starting engine...
Nov 23 20:42:12 compute-1 ceph-mgr[80441]: [dashboard INFO root] Starting engine...
Nov 23 20:42:12 compute-1 ceph-mon[80135]: 2.10 deep-scrub starts
Nov 23 20:42:12 compute-1 ceph-mon[80135]: 2.10 deep-scrub ok
Nov 23 20:42:12 compute-1 ceph-mon[80135]: Standby manager daemon compute-2.jtkauz restarted
Nov 23 20:42:12 compute-1 ceph-mon[80135]: Standby manager daemon compute-2.jtkauz started
Nov 23 20:42:12 compute-1 ceph-mon[80135]: 5.1f scrub starts
Nov 23 20:42:12 compute-1 ceph-mon[80135]: 5.1f scrub ok
Nov 23 20:42:12 compute-1 ceph-mgr[80441]: [dashboard INFO root] Engine started...
Nov 23 20:42:12 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e37 e37: 3 total, 3 up, 3 in
Nov 23 20:42:12 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Nov 23 20:42:12 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Nov 23 20:42:13 compute-1 sshd-session[82654]: Accepted publickey for ceph-admin from 192.168.122.100 port 32952 ssh2: RSA SHA256:ArvGVmp8+2uP4nDr4YVQ5KKtNyaQTjQGpGKaK12sPrI
Nov 23 20:42:13 compute-1 systemd-logind[793]: New session 34 of user ceph-admin.
Nov 23 20:42:13 compute-1 systemd[1]: Created slice User Slice of UID 42477.
Nov 23 20:42:13 compute-1 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 23 20:42:13 compute-1 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 23 20:42:13 compute-1 systemd[1]: Starting User Manager for UID 42477...
Nov 23 20:42:13 compute-1 systemd[82658]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 23 20:42:13 compute-1 systemd[82658]: Queued start job for default target Main User Target.
Nov 23 20:42:13 compute-1 systemd[82658]: Created slice User Application Slice.
Nov 23 20:42:13 compute-1 systemd[82658]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 23 20:42:13 compute-1 systemd[82658]: Started Daily Cleanup of User's Temporary Directories.
Nov 23 20:42:13 compute-1 systemd[82658]: Reached target Paths.
Nov 23 20:42:13 compute-1 systemd[82658]: Reached target Timers.
Nov 23 20:42:13 compute-1 systemd[82658]: Starting D-Bus User Message Bus Socket...
Nov 23 20:42:13 compute-1 systemd[82658]: Starting Create User's Volatile Files and Directories...
Nov 23 20:42:13 compute-1 systemd[82658]: Listening on D-Bus User Message Bus Socket.
Nov 23 20:42:13 compute-1 systemd[82658]: Reached target Sockets.
Nov 23 20:42:13 compute-1 systemd[82658]: Finished Create User's Volatile Files and Directories.
Nov 23 20:42:13 compute-1 systemd[82658]: Reached target Basic System.
Nov 23 20:42:13 compute-1 systemd[82658]: Reached target Main User Target.
Nov 23 20:42:13 compute-1 systemd[82658]: Startup finished in 134ms.
Nov 23 20:42:13 compute-1 systemd[1]: Started User Manager for UID 42477.
Nov 23 20:42:13 compute-1 systemd[1]: Started Session 34 of User ceph-admin.
Nov 23 20:42:13 compute-1 sshd-session[82654]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 23 20:42:13 compute-1 sudo[82674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:42:13 compute-1 sudo[82674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:13 compute-1 sudo[82674]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:13 compute-1 sudo[82699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 23 20:42:13 compute-1 sudo[82699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:13 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.f scrub starts
Nov 23 20:42:14 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.f scrub ok
Nov 23 20:42:15 compute-1 ceph-mon[80135]: 3.8 scrub starts
Nov 23 20:42:15 compute-1 ceph-mon[80135]: 3.8 scrub ok
Nov 23 20:42:15 compute-1 ceph-mon[80135]: Standby manager daemon compute-1.kgyerp restarted
Nov 23 20:42:15 compute-1 ceph-mon[80135]: Standby manager daemon compute-1.kgyerp started
Nov 23 20:42:15 compute-1 ceph-mon[80135]: mgrmap e23: compute-0.oyehye(active, starting, since 6s), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 20:42:15 compute-1 ceph-mon[80135]: Active manager daemon compute-0.oyehye restarted
Nov 23 20:42:15 compute-1 ceph-mon[80135]: Activating manager daemon compute-0.oyehye
Nov 23 20:42:15 compute-1 ceph-mon[80135]: osdmap e37: 3 total, 3 up, 3 in
Nov 23 20:42:15 compute-1 ceph-mon[80135]: mgrmap e24: compute-0.oyehye(active, starting, since 0.085284s), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 20:42:15 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 23 20:42:15 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 23 20:42:15 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 23 20:42:15 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mgr metadata", "who": "compute-0.oyehye", "id": "compute-0.oyehye"}]: dispatch
Nov 23 20:42:15 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mgr metadata", "who": "compute-2.jtkauz", "id": "compute-2.jtkauz"}]: dispatch
Nov 23 20:42:15 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mgr metadata", "who": "compute-1.kgyerp", "id": "compute-1.kgyerp"}]: dispatch
Nov 23 20:42:15 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 20:42:15 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 23 20:42:15 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 23 20:42:15 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 23 20:42:15 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 23 20:42:15 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 23 20:42:15 compute-1 ceph-mon[80135]: Manager daemon compute-0.oyehye is now available
Nov 23 20:42:15 compute-1 ceph-mon[80135]: 5.10 scrub starts
Nov 23 20:42:15 compute-1 ceph-mon[80135]: 5.10 scrub ok
Nov 23 20:42:15 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/mirror_snapshot_schedule"}]: dispatch
Nov 23 20:42:15 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/trash_purge_schedule"}]: dispatch
Nov 23 20:42:15 compute-1 podman[82795]: 2025-11-23 20:42:15.34406769 +0000 UTC m=+1.174523850 container exec e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 20:42:15 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).mds e2 new map
Nov 23 20:42:15 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).mds e2 print_map
                                           e2
                                           btime 2025-11-23T20:42:15.389935+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-23T20:42:15.389822+0000
                                           modified        2025-11-23T20:42:15.389822+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
Nov 23 20:42:15 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e38 e38: 3 total, 3 up, 3 in
Nov 23 20:42:15 compute-1 podman[82795]: 2025-11-23 20:42:15.454088889 +0000 UTC m=+1.284545029 container exec_died e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 23 20:42:15 compute-1 sudo[82699]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:15 compute-1 sudo[82882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:42:15 compute-1 sudo[82882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:15 compute-1 sudo[82882]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:15 compute-1 sudo[82907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 20:42:15 compute-1 sudo[82907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:16 compute-1 ceph-mon[80135]: 4.19 scrub starts
Nov 23 20:42:16 compute-1 ceph-mon[80135]: 4.19 scrub ok
Nov 23 20:42:16 compute-1 ceph-mon[80135]: 3.f scrub starts
Nov 23 20:42:16 compute-1 ceph-mon[80135]: 3.f scrub ok
Nov 23 20:42:16 compute-1 ceph-mon[80135]: [23/Nov/2025:20:42:14] ENGINE Bus STARTING
Nov 23 20:42:16 compute-1 ceph-mon[80135]: [23/Nov/2025:20:42:14] ENGINE Serving on http://192.168.122.100:8765
Nov 23 20:42:16 compute-1 ceph-mon[80135]: 4.2 scrub starts
Nov 23 20:42:16 compute-1 ceph-mon[80135]: 4.2 scrub ok
Nov 23 20:42:16 compute-1 ceph-mon[80135]: [23/Nov/2025:20:42:14] ENGINE Serving on https://192.168.122.100:7150
Nov 23 20:42:16 compute-1 ceph-mon[80135]: [23/Nov/2025:20:42:14] ENGINE Bus STARTED
Nov 23 20:42:16 compute-1 ceph-mon[80135]: [23/Nov/2025:20:42:14] ENGINE Client ('192.168.122.100', 49202) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 23 20:42:16 compute-1 ceph-mon[80135]: from='client.14418 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 20:42:16 compute-1 ceph-mon[80135]: mgrmap e25: compute-0.oyehye(active, since 2s), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 20:42:16 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 23 20:42:16 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 23 20:42:16 compute-1 ceph-mon[80135]: pgmap v3: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:42:16 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 23 20:42:16 compute-1 ceph-mon[80135]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 23 20:42:16 compute-1 ceph-mon[80135]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 23 20:42:16 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:16 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 23 20:42:16 compute-1 ceph-mon[80135]: osdmap e38: 3 total, 3 up, 3 in
Nov 23 20:42:16 compute-1 ceph-mon[80135]: fsmap cephfs:0
Nov 23 20:42:16 compute-1 ceph-mon[80135]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 23 20:42:16 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:16 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:16 compute-1 ceph-mon[80135]: 3.1b scrub starts
Nov 23 20:42:16 compute-1 ceph-mon[80135]: 3.1b scrub ok
Nov 23 20:42:16 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:16 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:16 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:16 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:16 compute-1 sudo[82907]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:16 compute-1 sudo[82963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:42:16 compute-1 sudo[82963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:16 compute-1 sudo[82963]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:16 compute-1 sudo[82988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Nov 23 20:42:16 compute-1 sudo[82988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:16 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:42:16 compute-1 sudo[82988]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:17 compute-1 sudo[83031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 23 20:42:17 compute-1 sudo[83031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:17 compute-1 sudo[83031]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:17 compute-1 sudo[83056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph
Nov 23 20:42:17 compute-1 sudo[83056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:17 compute-1 sudo[83056]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:17 compute-1 sudo[83081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.conf.new
Nov 23 20:42:17 compute-1 sudo[83081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:17 compute-1 sudo[83081]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:17 compute-1 sudo[83106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:42:17 compute-1 sudo[83106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:17 compute-1 sudo[83106]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:17 compute-1 sudo[83131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.conf.new
Nov 23 20:42:17 compute-1 sudo[83131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:17 compute-1 sudo[83131]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:17 compute-1 sudo[83179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.conf.new
Nov 23 20:42:17 compute-1 sudo[83179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:17 compute-1 sudo[83179]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:17 compute-1 ceph-mon[80135]: mgrmap e26: compute-0.oyehye(active, since 3s), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 20:42:17 compute-1 ceph-mon[80135]: from='client.14460 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 20:42:17 compute-1 ceph-mon[80135]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 23 20:42:17 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:17 compute-1 ceph-mon[80135]: 4.1d scrub starts
Nov 23 20:42:17 compute-1 ceph-mon[80135]: 4.1d scrub ok
Nov 23 20:42:17 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:17 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:17 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 23 20:42:17 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:17 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:17 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 23 20:42:17 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:17 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:17 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 23 20:42:17 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:42:17 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 20:42:17 compute-1 sudo[83204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.conf.new
Nov 23 20:42:17 compute-1 sudo[83204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:17 compute-1 sudo[83204]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:17 compute-1 sudo[83229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 23 20:42:17 compute-1 sudo[83229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:17 compute-1 sudo[83229]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:17 compute-1 sudo[83254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config
Nov 23 20:42:17 compute-1 sudo[83254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:17 compute-1 sudo[83254]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:17 compute-1 sudo[83279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config
Nov 23 20:42:17 compute-1 sudo[83279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:17 compute-1 sudo[83279]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:17 compute-1 sudo[83304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf.new
Nov 23 20:42:17 compute-1 sudo[83304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:17 compute-1 sudo[83304]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:17 compute-1 sudo[83329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:42:17 compute-1 sudo[83329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:17 compute-1 sudo[83329]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:17 compute-1 sudo[83354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf.new
Nov 23 20:42:17 compute-1 sudo[83354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:17 compute-1 sudo[83354]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:17 compute-1 sudo[83402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf.new
Nov 23 20:42:17 compute-1 sudo[83402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:17 compute-1 sudo[83402]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:17 compute-1 sudo[83427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf.new
Nov 23 20:42:17 compute-1 sudo[83427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:17 compute-1 sudo[83427]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:17 compute-1 sudo[83452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf.new /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 20:42:17 compute-1 sudo[83452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:17 compute-1 sudo[83452]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:18 compute-1 sudo[83477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 23 20:42:18 compute-1 sudo[83477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:18 compute-1 sudo[83477]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:18 compute-1 sudo[83502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph
Nov 23 20:42:18 compute-1 sudo[83502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:18 compute-1 sudo[83502]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:18 compute-1 sudo[83527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.client.admin.keyring.new
Nov 23 20:42:18 compute-1 sudo[83527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:18 compute-1 sudo[83527]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:18 compute-1 sudo[83552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:42:18 compute-1 sudo[83552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:18 compute-1 sudo[83552]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:18 compute-1 sudo[83577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.client.admin.keyring.new
Nov 23 20:42:18 compute-1 sudo[83577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:18 compute-1 sudo[83577]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:18 compute-1 ceph-mon[80135]: Adjusting osd_memory_target on compute-1 to 128.0M
Nov 23 20:42:18 compute-1 ceph-mon[80135]: Unable to set osd_memory_target on compute-1 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Nov 23 20:42:18 compute-1 ceph-mon[80135]: pgmap v5: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:42:18 compute-1 ceph-mon[80135]: Adjusting osd_memory_target on compute-2 to 128.0M
Nov 23 20:42:18 compute-1 ceph-mon[80135]: Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Nov 23 20:42:18 compute-1 ceph-mon[80135]: Adjusting osd_memory_target on compute-0 to 127.9M
Nov 23 20:42:18 compute-1 ceph-mon[80135]: Unable to set osd_memory_target on compute-0 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Nov 23 20:42:18 compute-1 ceph-mon[80135]: Updating compute-0:/etc/ceph/ceph.conf
Nov 23 20:42:18 compute-1 ceph-mon[80135]: Updating compute-1:/etc/ceph/ceph.conf
Nov 23 20:42:18 compute-1 ceph-mon[80135]: Updating compute-2:/etc/ceph/ceph.conf
Nov 23 20:42:18 compute-1 ceph-mon[80135]: from='client.14466 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 20:42:18 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Nov 23 20:42:18 compute-1 ceph-mon[80135]: Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 20:42:18 compute-1 ceph-mon[80135]: Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 20:42:18 compute-1 ceph-mon[80135]: 4.1c deep-scrub starts
Nov 23 20:42:18 compute-1 ceph-mon[80135]: 4.1c deep-scrub ok
Nov 23 20:42:18 compute-1 ceph-mon[80135]: mgrmap e27: compute-0.oyehye(active, since 4s), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 20:42:18 compute-1 sudo[83625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.client.admin.keyring.new
Nov 23 20:42:18 compute-1 sudo[83625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:18 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e39 e39: 3 total, 3 up, 3 in
Nov 23 20:42:18 compute-1 sudo[83625]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:18 compute-1 sudo[83650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.client.admin.keyring.new
Nov 23 20:42:18 compute-1 sudo[83650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:18 compute-1 sudo[83650]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:18 compute-1 sudo[83675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Nov 23 20:42:18 compute-1 sudo[83675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:18 compute-1 sudo[83675]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:18 compute-1 sudo[83700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config
Nov 23 20:42:18 compute-1 sudo[83700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:18 compute-1 sudo[83700]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:18 compute-1 sudo[83725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config
Nov 23 20:42:18 compute-1 sudo[83725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:18 compute-1 sudo[83725]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:18 compute-1 sudo[83750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring.new
Nov 23 20:42:18 compute-1 sudo[83750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:18 compute-1 sudo[83750]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:18 compute-1 sudo[83775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:42:18 compute-1 sudo[83775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:18 compute-1 sudo[83775]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:18 compute-1 sudo[83800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring.new
Nov 23 20:42:18 compute-1 sudo[83800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:18 compute-1 sudo[83800]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:18 compute-1 sudo[83848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring.new
Nov 23 20:42:18 compute-1 sudo[83848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:18 compute-1 sudo[83848]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:19 compute-1 sudo[83873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring.new
Nov 23 20:42:19 compute-1 sudo[83873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:19 compute-1 sudo[83873]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:19 compute-1 sudo[83898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring.new /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 20:42:19 compute-1 sudo[83898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:19 compute-1 sudo[83898]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:19 compute-1 ceph-mon[80135]: Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 20:42:19 compute-1 ceph-mon[80135]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 23 20:42:19 compute-1 ceph-mon[80135]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 23 20:42:19 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Nov 23 20:42:19 compute-1 ceph-mon[80135]: osdmap e39: 3 total, 3 up, 3 in
Nov 23 20:42:19 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Nov 23 20:42:19 compute-1 ceph-mon[80135]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 23 20:42:19 compute-1 ceph-mon[80135]: Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 20:42:19 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:19 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:19 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:19 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:19 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e40 e40: 3 total, 3 up, 3 in
Nov 23 20:42:20 compute-1 ceph-mon[80135]: Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 20:42:20 compute-1 ceph-mon[80135]: pgmap v7: 132 pgs: 1 unknown, 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:42:20 compute-1 ceph-mon[80135]: Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 20:42:20 compute-1 ceph-mon[80135]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 23 20:42:20 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Nov 23 20:42:20 compute-1 ceph-mon[80135]: osdmap e40: 3 total, 3 up, 3 in
Nov 23 20:42:20 compute-1 ceph-mon[80135]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Nov 23 20:42:20 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:20 compute-1 ceph-mon[80135]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Nov 23 20:42:20 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:20 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:20 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:20 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:20 compute-1 ceph-mon[80135]: Deploying daemon node-exporter.compute-0 on compute-0
Nov 23 20:42:20 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e41 e41: 3 total, 3 up, 3 in
Nov 23 20:42:21 compute-1 ceph-mon[80135]: osdmap e41: 3 total, 3 up, 3 in
Nov 23 20:42:21 compute-1 ceph-mon[80135]: pgmap v10: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 0 B/s wr, 4 op/s
Nov 23 20:42:21 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:42:22 compute-1 ceph-mon[80135]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 23 20:42:22 compute-1 ceph-mon[80135]: mgrmap e28: compute-0.oyehye(active, since 8s), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 20:42:22 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1678765881' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 23 20:42:22 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1678765881' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 23 20:42:22 compute-1 sudo[83923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:42:22 compute-1 sudo[83923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:22 compute-1 sudo[83923]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:22 compute-1 sudo[83948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/node-exporter:v1.7.0 --timeout 895 _orch deploy --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:42:22 compute-1 sudo[83948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:23 compute-1 systemd[1]: Reloading.
Nov 23 20:42:23 compute-1 systemd-rc-local-generator[84041]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:42:23 compute-1 systemd-sysv-generator[84045]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:42:23 compute-1 systemd[1]: Reloading.
Nov 23 20:42:23 compute-1 systemd-rc-local-generator[84076]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:42:23 compute-1 systemd-sysv-generator[84080]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:42:23 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:23 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:23 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:23 compute-1 ceph-mon[80135]: Deploying daemon node-exporter.compute-1 on compute-1
Nov 23 20:42:23 compute-1 ceph-mon[80135]: pgmap v11: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 0 B/s wr, 4 op/s
Nov 23 20:42:23 compute-1 systemd[1]: Starting Ceph node-exporter.compute-1 for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 20:42:23 compute-1 bash[84139]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Nov 23 20:42:24 compute-1 bash[84139]: Getting image source signatures
Nov 23 20:42:24 compute-1 bash[84139]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Nov 23 20:42:24 compute-1 bash[84139]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Nov 23 20:42:24 compute-1 bash[84139]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Nov 23 20:42:24 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3890036027' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 23 20:42:24 compute-1 bash[84139]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Nov 23 20:42:24 compute-1 bash[84139]: Writing manifest to image destination
Nov 23 20:42:24 compute-1 podman[84139]: 2025-11-23 20:42:24.992524169 +0000 UTC m=+1.056002415 container create 64d60b8099df0a9bc1b978bb8d0ff809e5476e0bdc0e1ff07d52a594a6c59770 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 20:42:25 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c505063cb26b444322d1f6be8db3eb38f4e56399e30c28b7dad8c73418e9a0dc/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Nov 23 20:42:25 compute-1 podman[84139]: 2025-11-23 20:42:25.047404136 +0000 UTC m=+1.110882392 container init 64d60b8099df0a9bc1b978bb8d0ff809e5476e0bdc0e1ff07d52a594a6c59770 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 20:42:25 compute-1 podman[84139]: 2025-11-23 20:42:25.051645869 +0000 UTC m=+1.115124115 container start 64d60b8099df0a9bc1b978bb8d0ff809e5476e0bdc0e1ff07d52a594a6c59770 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 20:42:25 compute-1 bash[84139]: 64d60b8099df0a9bc1b978bb8d0ff809e5476e0bdc0e1ff07d52a594a6c59770
Nov 23 20:42:25 compute-1 podman[84139]: 2025-11-23 20:42:24.973488836 +0000 UTC m=+1.036967112 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.058Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.058Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=arp
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=bcache
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=bonding
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=btrfs
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=conntrack
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=cpu
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=cpufreq
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=diskstats
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=dmi
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=edac
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=entropy
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=fibrechannel
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=filefd
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=filesystem
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=hwmon
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=infiniband
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=ipvs
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=loadavg
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=mdadm
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=meminfo
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=netclass
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=netdev
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=netstat
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=nfs
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=nfsd
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=nvme
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=os
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=pressure
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=rapl
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=schedstat
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=selinux
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=sockstat
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=softnet
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=stat
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.060Z caller=node_exporter.go:117 level=info collector=tapestats
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.060Z caller=node_exporter.go:117 level=info collector=textfile
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.060Z caller=node_exporter.go:117 level=info collector=thermal_zone
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.060Z caller=node_exporter.go:117 level=info collector=time
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.060Z caller=node_exporter.go:117 level=info collector=udp_queues
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.060Z caller=node_exporter.go:117 level=info collector=uname
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.060Z caller=node_exporter.go:117 level=info collector=vmstat
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.060Z caller=node_exporter.go:117 level=info collector=xfs
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.060Z caller=node_exporter.go:117 level=info collector=zfs
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.060Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Nov 23 20:42:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.060Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Nov 23 20:42:25 compute-1 systemd[1]: Started Ceph node-exporter.compute-1 for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 20:42:25 compute-1 sudo[83948]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:25 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2935510662' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 20:42:25 compute-1 ceph-mon[80135]: pgmap v12: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 4 op/s
Nov 23 20:42:25 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:25 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:25 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:25 compute-1 ceph-mon[80135]: Deploying daemon node-exporter.compute-2 on compute-2
Nov 23 20:42:26 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:42:27 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1987053989' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 23 20:42:28 compute-1 ceph-mon[80135]: pgmap v13: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 3 op/s
Nov 23 20:42:29 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:29 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:29 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:29 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:29 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 20:42:29 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 20:42:29 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:42:30 compute-1 ceph-mon[80135]: from='client.14502 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 23 20:42:30 compute-1 ceph-mon[80135]: pgmap v14: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 0 B/s wr, 2 op/s
Nov 23 20:42:31 compute-1 ceph-mon[80135]: from='client.14508 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 23 20:42:31 compute-1 sshd-session[84225]: Invalid user debianuser from 34.91.0.68 port 42148
Nov 23 20:42:31 compute-1 sshd-session[84225]: Received disconnect from 34.91.0.68 port 42148:11: Bye Bye [preauth]
Nov 23 20:42:31 compute-1 sshd-session[84225]: Disconnected from invalid user debianuser 34.91.0.68 port 42148 [preauth]
Nov 23 20:42:31 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:42:32 compute-1 ceph-mon[80135]: pgmap v15: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:42:33 compute-1 ceph-mon[80135]: from='client.14514 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 23 20:42:33 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:33 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:33 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.cwocqr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 23 20:42:33 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.cwocqr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 23 20:42:33 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:33 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:42:33 compute-1 ceph-mon[80135]: Deploying daemon rgw.rgw.compute-2.cwocqr on compute-2
Nov 23 20:42:33 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:34 compute-1 sudo[84227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:42:34 compute-1 sudo[84227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:34 compute-1 sudo[84227]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:34 compute-1 sudo[84252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:42:34 compute-1 sudo[84252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:34 compute-1 ceph-mon[80135]: pgmap v16: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:42:34 compute-1 ceph-mon[80135]: from='client.14520 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 23 20:42:34 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:34 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:34 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:34 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.exwrda", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 23 20:42:34 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.exwrda", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 23 20:42:34 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:34 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:42:35 compute-1 podman[84317]: 2025-11-23 20:42:35.100255854 +0000 UTC m=+0.040966830 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 20:42:35 compute-1 podman[84317]: 2025-11-23 20:42:35.230472475 +0000 UTC m=+0.171183431 container create a1c060fb663cb949795d5899a097ddf6c2abfb145c1215805978c43509c15c98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 23 20:42:35 compute-1 systemd[1]: Started libpod-conmon-a1c060fb663cb949795d5899a097ddf6c2abfb145c1215805978c43509c15c98.scope.
Nov 23 20:42:35 compute-1 systemd[1]: Started libcrun container.
Nov 23 20:42:35 compute-1 podman[84317]: 2025-11-23 20:42:35.386556875 +0000 UTC m=+0.327267851 container init a1c060fb663cb949795d5899a097ddf6c2abfb145c1215805978c43509c15c98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 23 20:42:35 compute-1 podman[84317]: 2025-11-23 20:42:35.397281376 +0000 UTC m=+0.337992332 container start a1c060fb663cb949795d5899a097ddf6c2abfb145c1215805978c43509c15c98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 23 20:42:35 compute-1 adoring_colden[84332]: 167 167
Nov 23 20:42:35 compute-1 systemd[1]: libpod-a1c060fb663cb949795d5899a097ddf6c2abfb145c1215805978c43509c15c98.scope: Deactivated successfully.
Nov 23 20:42:35 compute-1 conmon[84332]: conmon a1c060fb663cb949795d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a1c060fb663cb949795d5899a097ddf6c2abfb145c1215805978c43509c15c98.scope/container/memory.events
Nov 23 20:42:35 compute-1 podman[84317]: 2025-11-23 20:42:35.45535648 +0000 UTC m=+0.396067436 container attach a1c060fb663cb949795d5899a097ddf6c2abfb145c1215805978c43509c15c98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_colden, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 20:42:35 compute-1 podman[84317]: 2025-11-23 20:42:35.456205251 +0000 UTC m=+0.396916207 container died a1c060fb663cb949795d5899a097ddf6c2abfb145c1215805978c43509c15c98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_colden, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 20:42:35 compute-1 systemd[1]: var-lib-containers-storage-overlay-fcec6bf971eda074fd2853f7883ea835e2ff20397a05c0103263b591c7f3f0ac-merged.mount: Deactivated successfully.
Nov 23 20:42:35 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e42 e42: 3 total, 3 up, 3 in
Nov 23 20:42:35 compute-1 podman[84317]: 2025-11-23 20:42:35.628902226 +0000 UTC m=+0.569613182 container remove a1c060fb663cb949795d5899a097ddf6c2abfb145c1215805978c43509c15c98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_colden, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 20:42:35 compute-1 systemd[1]: libpod-conmon-a1c060fb663cb949795d5899a097ddf6c2abfb145c1215805978c43509c15c98.scope: Deactivated successfully.
Nov 23 20:42:35 compute-1 systemd[1]: Reloading.
Nov 23 20:42:35 compute-1 systemd-rc-local-generator[84379]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:42:35 compute-1 ceph-mon[80135]: Deploying daemon rgw.rgw.compute-1.exwrda on compute-1
Nov 23 20:42:35 compute-1 ceph-mon[80135]: pgmap v17: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:42:35 compute-1 ceph-mon[80135]: osdmap e42: 3 total, 3 up, 3 in
Nov 23 20:42:35 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1418789177' entity='client.rgw.rgw.compute-2.cwocqr' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 23 20:42:35 compute-1 ceph-mon[80135]: from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 23 20:42:35 compute-1 systemd-sysv-generator[84382]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:42:35 compute-1 systemd[1]: Reloading.
Nov 23 20:42:35 compute-1 systemd-rc-local-generator[84420]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:42:35 compute-1 systemd-sysv-generator[84423]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:42:36 compute-1 systemd[1]: Starting Ceph rgw.rgw.compute-1.exwrda for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 20:42:36 compute-1 podman[84479]: 2025-11-23 20:42:36.395616966 +0000 UTC m=+0.040042017 container create 29b10272a4e9e5cf00b639059e82bf80ac94e4fab69c520ec6e9080d5ceb68c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-rgw-rgw-compute-1-exwrda, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 20:42:36 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e62e0742dadffaf92198a8f5dab2c9e91c3c278e894a753e4c2e260330598c5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 20:42:36 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e62e0742dadffaf92198a8f5dab2c9e91c3c278e894a753e4c2e260330598c5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 20:42:36 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e62e0742dadffaf92198a8f5dab2c9e91c3c278e894a753e4c2e260330598c5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 20:42:36 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e62e0742dadffaf92198a8f5dab2c9e91c3c278e894a753e4c2e260330598c5b/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-1.exwrda supports timestamps until 2038 (0x7fffffff)
Nov 23 20:42:36 compute-1 podman[84479]: 2025-11-23 20:42:36.45452591 +0000 UTC m=+0.098950981 container init 29b10272a4e9e5cf00b639059e82bf80ac94e4fab69c520ec6e9080d5ceb68c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-rgw-rgw-compute-1-exwrda, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 20:42:36 compute-1 podman[84479]: 2025-11-23 20:42:36.460718381 +0000 UTC m=+0.105143432 container start 29b10272a4e9e5cf00b639059e82bf80ac94e4fab69c520ec6e9080d5ceb68c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-rgw-rgw-compute-1-exwrda, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 23 20:42:36 compute-1 bash[84479]: 29b10272a4e9e5cf00b639059e82bf80ac94e4fab69c520ec6e9080d5ceb68c1
Nov 23 20:42:36 compute-1 podman[84479]: 2025-11-23 20:42:36.37977524 +0000 UTC m=+0.024200321 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 20:42:36 compute-1 systemd[1]: Started Ceph rgw.rgw.compute-1.exwrda for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 20:42:36 compute-1 radosgw[84498]: deferred set uid:gid to 167:167 (ceph:ceph)
Nov 23 20:42:36 compute-1 radosgw[84498]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Nov 23 20:42:36 compute-1 radosgw[84498]: framework: beast
Nov 23 20:42:36 compute-1 radosgw[84498]: framework conf key: endpoint, val: 192.168.122.101:8082
Nov 23 20:42:36 compute-1 radosgw[84498]: init_numa not setting numa affinity
Nov 23 20:42:36 compute-1 sudo[84252]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:36 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e43 e43: 3 total, 3 up, 3 in
Nov 23 20:42:36 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:42:36 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/4010806180' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 23 20:42:36 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:36 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:36 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:36 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.lntkpb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 23 20:42:36 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.lntkpb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 23 20:42:36 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:36 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:42:36 compute-1 ceph-mon[80135]: from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 23 20:42:36 compute-1 ceph-mon[80135]: osdmap e43: 3 total, 3 up, 3 in
Nov 23 20:42:37 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e44 e44: 3 total, 3 up, 3 in
Nov 23 20:42:37 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Nov 23 20:42:37 compute-1 ceph-mon[80135]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/4191610001' entity='client.rgw.rgw.compute-1.exwrda' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 23 20:42:37 compute-1 ceph-mon[80135]: Deploying daemon rgw.rgw.compute-0.lntkpb on compute-0
Nov 23 20:42:37 compute-1 ceph-mon[80135]: pgmap v20: 133 pgs: 1 unknown, 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:42:37 compute-1 ceph-mon[80135]: osdmap e44: 3 total, 3 up, 3 in
Nov 23 20:42:37 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/141380246' entity='client.rgw.rgw.compute-2.cwocqr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 23 20:42:37 compute-1 ceph-mon[80135]: from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 23 20:42:37 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/4191610001' entity='client.rgw.rgw.compute-1.exwrda' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 23 20:42:37 compute-1 ceph-mon[80135]: from='client.? ' entity='client.rgw.rgw.compute-1.exwrda' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 23 20:42:37 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2408235100' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 23 20:42:37 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 44 pg[10.0( empty local-lis/les=0/0 n=0 ec=44/44 lis/c=0/0 les/c/f=0/0/0 sis=44) [0] r=0 lpr=44 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:42:39 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e45 e45: 3 total, 3 up, 3 in
Nov 23 20:42:39 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 45 pg[10.0( empty local-lis/les=44/45 n=0 ec=44/44 lis/c=0/0 les/c/f=0/0/0 sis=44) [0] r=0 lpr=44 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:42:39 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:39 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:39 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:39 compute-1 ceph-mon[80135]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 23 20:42:39 compute-1 ceph-mon[80135]: pgmap v22: 134 pgs: 2 unknown, 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:42:39 compute-1 ceph-mon[80135]: from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 23 20:42:39 compute-1 ceph-mon[80135]: from='client.? ' entity='client.rgw.rgw.compute-1.exwrda' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 23 20:42:39 compute-1 ceph-mon[80135]: osdmap e45: 3 total, 3 up, 3 in
Nov 23 20:42:39 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:39 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:39 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.utubtn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 23 20:42:39 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.utubtn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 23 20:42:39 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:42:39 compute-1 ceph-mon[80135]: Deploying daemon mds.cephfs.compute-2.utubtn on compute-2
Nov 23 20:42:39 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3456864184' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 23 20:42:40 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e46 e46: 3 total, 3 up, 3 in
Nov 23 20:42:40 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Nov 23 20:42:40 compute-1 ceph-mon[80135]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/4191610001' entity='client.rgw.rgw.compute-1.exwrda' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 23 20:42:40 compute-1 ceph-mon[80135]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 23 20:42:40 compute-1 ceph-mon[80135]: osdmap e46: 3 total, 3 up, 3 in
Nov 23 20:42:40 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/141380246' entity='client.rgw.rgw.compute-2.cwocqr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 23 20:42:40 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/4054506421' entity='client.rgw.rgw.compute-0.lntkpb' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 23 20:42:40 compute-1 ceph-mon[80135]: from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 23 20:42:40 compute-1 ceph-mon[80135]: from='client.? ' entity='client.rgw.rgw.compute-1.exwrda' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 23 20:42:40 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/4191610001' entity='client.rgw.rgw.compute-1.exwrda' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 23 20:42:41 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e47 e47: 3 total, 3 up, 3 in
Nov 23 20:42:41 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:42:42 compute-1 ceph-mon[80135]: pgmap v25: 135 pgs: 1 creating+peering, 134 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 1.9 KiB/s wr, 10 op/s
Nov 23 20:42:42 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/4054506421' entity='client.rgw.rgw.compute-0.lntkpb' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 23 20:42:42 compute-1 ceph-mon[80135]: from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 23 20:42:42 compute-1 ceph-mon[80135]: from='client.? ' entity='client.rgw.rgw.compute-1.exwrda' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 23 20:42:42 compute-1 ceph-mon[80135]: osdmap e47: 3 total, 3 up, 3 in
Nov 23 20:42:42 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:42 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:42 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:42 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.jcbopz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 23 20:42:42 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.jcbopz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 23 20:42:42 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/421589308' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 23 20:42:42 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:42:42 compute-1 ceph-mon[80135]: Deploying daemon mds.cephfs.compute-0.jcbopz on compute-0
Nov 23 20:42:42 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e48 e48: 3 total, 3 up, 3 in
Nov 23 20:42:42 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Nov 23 20:42:42 compute-1 ceph-mon[80135]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/4191610001' entity='client.rgw.rgw.compute-1.exwrda' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 23 20:42:42 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).mds e3 new map
Nov 23 20:42:42 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).mds e3 print_map
                                           e3
                                           btime 2025-11-23T20:42:42.276651+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-23T20:42:15.389822+0000
                                           modified        2025-11-23T20:42:15.389822+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-2.utubtn{-1:24181} state up:standby seq 1 addr [v2:192.168.122.102:6804/3232844591,v1:192.168.122.102:6805/3232844591] compat {c=[1],r=[1],i=[1fff]}]
Nov 23 20:42:42 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).mds e4 new map
Nov 23 20:42:42 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).mds e4 print_map
                                           e4
                                           btime 2025-11-23T20:42:42.291982+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-23T20:42:15.389822+0000
                                           modified        2025-11-23T20:42:42.291972+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24181}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-2.utubtn{0:24181} state up:creating seq 1 addr [v2:192.168.122.102:6804/3232844591,v1:192.168.122.102:6805/3232844591] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Nov 23 20:42:42 compute-1 sudo[85086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:42:42 compute-1 sudo[85086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:42 compute-1 sudo[85086]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:42 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 48 pg[12.0( empty local-lis/les=0/0 n=0 ec=48/48 lis/c=0/0 les/c/f=0/0/0 sis=48) [0] r=0 lpr=48 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:42:42 compute-1 sudo[85111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:42:42 compute-1 sudo[85111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:43 compute-1 ceph-mon[80135]: osdmap e48: 3 total, 3 up, 3 in
Nov 23 20:42:43 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/4054506421' entity='client.rgw.rgw.compute-0.lntkpb' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 23 20:42:43 compute-1 ceph-mon[80135]: from='client.? ' entity='client.rgw.rgw.compute-1.exwrda' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 23 20:42:43 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/4191610001' entity='client.rgw.rgw.compute-1.exwrda' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 23 20:42:43 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/141380246' entity='client.rgw.rgw.compute-2.cwocqr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 23 20:42:43 compute-1 ceph-mon[80135]: mds.? [v2:192.168.122.102:6804/3232844591,v1:192.168.122.102:6805/3232844591] up:boot
Nov 23 20:42:43 compute-1 ceph-mon[80135]: daemon mds.cephfs.compute-2.utubtn assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 23 20:42:43 compute-1 ceph-mon[80135]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 23 20:42:43 compute-1 ceph-mon[80135]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 23 20:42:43 compute-1 ceph-mon[80135]: fsmap cephfs:0 1 up:standby
Nov 23 20:42:43 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.utubtn"}]: dispatch
Nov 23 20:42:43 compute-1 ceph-mon[80135]: from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 23 20:42:43 compute-1 ceph-mon[80135]: fsmap cephfs:1 {0=cephfs.compute-2.utubtn=up:creating}
Nov 23 20:42:43 compute-1 ceph-mon[80135]: daemon mds.cephfs.compute-2.utubtn is now active in filesystem cephfs as rank 0
Nov 23 20:42:43 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:43 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:43 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:43 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.gmfhnm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 23 20:42:43 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.gmfhnm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 23 20:42:43 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:42:43 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:43 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e49 e49: 3 total, 3 up, 3 in
Nov 23 20:42:43 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 49 pg[12.0( empty local-lis/les=48/49 n=0 ec=48/48 lis/c=0/0 les/c/f=0/0/0 sis=48) [0] r=0 lpr=48 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:42:43 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).mds e5 new map
Nov 23 20:42:43 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).mds e5 print_map
                                           e5
                                           btime 2025-11-23T20:42:43.300630+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-23T20:42:15.389822+0000
                                           modified        2025-11-23T20:42:43.300628+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24181}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 24181 members: 24181
                                           [mds.cephfs.compute-2.utubtn{0:24181} state up:active seq 2 addr [v2:192.168.122.102:6804/3232844591,v1:192.168.122.102:6805/3232844591] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.jcbopz{-1:14580} state up:standby seq 1 addr [v2:192.168.122.100:6806/3257423559,v1:192.168.122.100:6807/3257423559] compat {c=[1],r=[1],i=[1fff]}]
Nov 23 20:42:43 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Nov 23 20:42:43 compute-1 ceph-mon[80135]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/4191610001' entity='client.rgw.rgw.compute-1.exwrda' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 23 20:42:43 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).mds e6 new map
Nov 23 20:42:43 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).mds e6 print_map
                                           e6
                                           btime 2025-11-23T20:42:43.320643+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-23T20:42:15.389822+0000
                                           modified        2025-11-23T20:42:43.300628+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24181}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24181 members: 24181
                                           [mds.cephfs.compute-2.utubtn{0:24181} state up:active seq 2 addr [v2:192.168.122.102:6804/3232844591,v1:192.168.122.102:6805/3232844591] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.jcbopz{-1:14580} state up:standby seq 1 addr [v2:192.168.122.100:6806/3257423559,v1:192.168.122.100:6807/3257423559] compat {c=[1],r=[1],i=[1fff]}]
Nov 23 20:42:43 compute-1 podman[85176]: 2025-11-23 20:42:43.348672673 +0000 UTC m=+0.038459619 container create 8e0eaa7535316345fa465ad362d1eee43a8cb2778aec3532a5ee814fe16163f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_lalande, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 23 20:42:43 compute-1 systemd[1]: Started libpod-conmon-8e0eaa7535316345fa465ad362d1eee43a8cb2778aec3532a5ee814fe16163f5.scope.
Nov 23 20:42:43 compute-1 systemd[1]: Started libcrun container.
Nov 23 20:42:43 compute-1 podman[85176]: 2025-11-23 20:42:43.426702463 +0000 UTC m=+0.116489429 container init 8e0eaa7535316345fa465ad362d1eee43a8cb2778aec3532a5ee814fe16163f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_lalande, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 23 20:42:43 compute-1 podman[85176]: 2025-11-23 20:42:43.332039627 +0000 UTC m=+0.021826593 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 20:42:43 compute-1 podman[85176]: 2025-11-23 20:42:43.433518328 +0000 UTC m=+0.123305274 container start 8e0eaa7535316345fa465ad362d1eee43a8cb2778aec3532a5ee814fe16163f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_lalande, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 23 20:42:43 compute-1 podman[85176]: 2025-11-23 20:42:43.436538302 +0000 UTC m=+0.126325278 container attach 8e0eaa7535316345fa465ad362d1eee43a8cb2778aec3532a5ee814fe16163f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True)
Nov 23 20:42:43 compute-1 competent_lalande[85191]: 167 167
Nov 23 20:42:43 compute-1 systemd[1]: libpod-8e0eaa7535316345fa465ad362d1eee43a8cb2778aec3532a5ee814fe16163f5.scope: Deactivated successfully.
Nov 23 20:42:43 compute-1 podman[85176]: 2025-11-23 20:42:43.439076273 +0000 UTC m=+0.128863219 container died 8e0eaa7535316345fa465ad362d1eee43a8cb2778aec3532a5ee814fe16163f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_lalande, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 23 20:42:43 compute-1 systemd[1]: var-lib-containers-storage-overlay-88f227109d53bd65d9cd9ce01490e3ff5c635129845f762ff145e7e43b658a1e-merged.mount: Deactivated successfully.
Nov 23 20:42:43 compute-1 podman[85176]: 2025-11-23 20:42:43.476473314 +0000 UTC m=+0.166260270 container remove 8e0eaa7535316345fa465ad362d1eee43a8cb2778aec3532a5ee814fe16163f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 20:42:43 compute-1 systemd[1]: libpod-conmon-8e0eaa7535316345fa465ad362d1eee43a8cb2778aec3532a5ee814fe16163f5.scope: Deactivated successfully.
Nov 23 20:42:43 compute-1 systemd[1]: Reloading.
Nov 23 20:42:43 compute-1 systemd-rc-local-generator[85237]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:42:43 compute-1 systemd-sysv-generator[85241]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:42:43 compute-1 systemd[1]: Reloading.
Nov 23 20:42:43 compute-1 systemd-rc-local-generator[85277]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:42:43 compute-1 systemd-sysv-generator[85280]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:42:44 compute-1 systemd[1]: Starting Ceph mds.cephfs.compute-1.gmfhnm for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 20:42:44 compute-1 podman[85333]: 2025-11-23 20:42:44.243100502 +0000 UTC m=+0.041292767 container create 80ba811dbdb9350860999bfbe11c3c2b025911441594fc68256ee132d5b2f265 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mds-cephfs-compute-1-gmfhnm, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 20:42:44 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed2cef276f6913a86c842440362a727d6e5da9d15277bccd093da7207adca4ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 20:42:44 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed2cef276f6913a86c842440362a727d6e5da9d15277bccd093da7207adca4ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 20:42:44 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed2cef276f6913a86c842440362a727d6e5da9d15277bccd093da7207adca4ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 20:42:44 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed2cef276f6913a86c842440362a727d6e5da9d15277bccd093da7207adca4ba/merged/var/lib/ceph/mds/ceph-cephfs.compute-1.gmfhnm supports timestamps until 2038 (0x7fffffff)
Nov 23 20:42:44 compute-1 podman[85333]: 2025-11-23 20:42:44.302517789 +0000 UTC m=+0.100710064 container init 80ba811dbdb9350860999bfbe11c3c2b025911441594fc68256ee132d5b2f265 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mds-cephfs-compute-1-gmfhnm, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 23 20:42:44 compute-1 podman[85333]: 2025-11-23 20:42:44.308258098 +0000 UTC m=+0.106450353 container start 80ba811dbdb9350860999bfbe11c3c2b025911441594fc68256ee132d5b2f265 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mds-cephfs-compute-1-gmfhnm, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 23 20:42:44 compute-1 bash[85333]: 80ba811dbdb9350860999bfbe11c3c2b025911441594fc68256ee132d5b2f265
Nov 23 20:42:44 compute-1 podman[85333]: 2025-11-23 20:42:44.22534757 +0000 UTC m=+0.023539845 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 20:42:44 compute-1 systemd[1]: Started Ceph mds.cephfs.compute-1.gmfhnm for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 20:42:44 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e50 e50: 3 total, 3 up, 3 in
Nov 23 20:42:44 compute-1 ceph-mon[80135]: Deploying daemon mds.cephfs.compute-1.gmfhnm on compute-1
Nov 23 20:42:44 compute-1 ceph-mon[80135]: pgmap v28: 136 pgs: 1 unknown, 1 creating+peering, 134 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 2.0 KiB/s wr, 11 op/s
Nov 23 20:42:44 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/4054506421' entity='client.rgw.rgw.compute-0.lntkpb' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 23 20:42:44 compute-1 ceph-mon[80135]: from='client.? ' entity='client.rgw.rgw.compute-1.exwrda' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 23 20:42:44 compute-1 ceph-mon[80135]: from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 23 20:42:44 compute-1 ceph-mon[80135]: osdmap e49: 3 total, 3 up, 3 in
Nov 23 20:42:44 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/4054506421' entity='client.rgw.rgw.compute-0.lntkpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 23 20:42:44 compute-1 ceph-mon[80135]: mds.? [v2:192.168.122.102:6804/3232844591,v1:192.168.122.102:6805/3232844591] up:active
Nov 23 20:42:44 compute-1 ceph-mon[80135]: mds.? [v2:192.168.122.100:6806/3257423559,v1:192.168.122.100:6807/3257423559] up:boot
Nov 23 20:42:44 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/4191610001' entity='client.rgw.rgw.compute-1.exwrda' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 23 20:42:44 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/141380246' entity='client.rgw.rgw.compute-2.cwocqr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 23 20:42:44 compute-1 ceph-mon[80135]: fsmap cephfs:1 {0=cephfs.compute-2.utubtn=up:active} 1 up:standby
Nov 23 20:42:44 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.jcbopz"}]: dispatch
Nov 23 20:42:44 compute-1 ceph-mon[80135]: from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 23 20:42:44 compute-1 ceph-mon[80135]: from='client.? ' entity='client.rgw.rgw.compute-1.exwrda' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 23 20:42:44 compute-1 ceph-mon[80135]: fsmap cephfs:1 {0=cephfs.compute-2.utubtn=up:active} 1 up:standby
Nov 23 20:42:44 compute-1 ceph-mds[85352]: set uid:gid to 167:167 (ceph:ceph)
Nov 23 20:42:44 compute-1 ceph-mds[85352]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Nov 23 20:42:44 compute-1 ceph-mds[85352]: main not setting numa affinity
Nov 23 20:42:44 compute-1 ceph-mds[85352]: pidfile_write: ignore empty --pid-file
Nov 23 20:42:44 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mds-cephfs-compute-1-gmfhnm[85348]: starting mds.cephfs.compute-1.gmfhnm at 
Nov 23 20:42:44 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Updating MDS map to version 6 from mon.2
Nov 23 20:42:44 compute-1 sudo[85111]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:44 compute-1 radosgw[84498]: v1 topic migration: starting v1 topic migration..
Nov 23 20:42:44 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-rgw-rgw-compute-1-exwrda[84494]: 2025-11-23T20:42:44.571+0000 7f84b3dee980 -1 LDAP not started since no server URIs were provided in the configuration.
Nov 23 20:42:44 compute-1 radosgw[84498]: LDAP not started since no server URIs were provided in the configuration.
Nov 23 20:42:44 compute-1 radosgw[84498]: v1 topic migration: finished v1 topic migration
Nov 23 20:42:44 compute-1 radosgw[84498]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Nov 23 20:42:44 compute-1 radosgw[84498]: framework: beast
Nov 23 20:42:44 compute-1 radosgw[84498]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Nov 23 20:42:44 compute-1 radosgw[84498]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Nov 23 20:42:44 compute-1 radosgw[84498]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Nov 23 20:42:44 compute-1 radosgw[84498]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Nov 23 20:42:44 compute-1 radosgw[84498]: starting handler: beast
Nov 23 20:42:44 compute-1 radosgw[84498]: set uid:gid to 167:167 (ceph:ceph)
Nov 23 20:42:44 compute-1 radosgw[84498]: mgrc service_daemon_register rgw.24260 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.101:8082,frontend_type#0=beast,hostname=compute-1,id=rgw.compute-1.exwrda,kernel_description=#1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025,kernel_version=5.14.0-639.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=7b74c4d0-333d-4a78-943d-fd3c4abdfa87,zone_name=default,zonegroup_id=3560ca63-18fc-44aa-8d4c-f5d89c554a9f,zonegroup_name=default}
Nov 23 20:42:44 compute-1 radosgw[84498]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Nov 23 20:42:44 compute-1 radosgw[84498]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Nov 23 20:42:44 compute-1 radosgw[84498]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Nov 23 20:42:44 compute-1 radosgw[84498]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Nov 23 20:42:44 compute-1 radosgw[84498]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Nov 23 20:42:44 compute-1 radosgw[84498]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Nov 23 20:42:44 compute-1 sudo[85405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:42:44 compute-1 sudo[85405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:44 compute-1 sudo[85405]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:44 compute-1 sudo[85430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:42:44 compute-1 sudo[85430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:45 compute-1 podman[85493]: 2025-11-23 20:42:45.215887499 +0000 UTC m=+0.040585019 container create bf271ae442860ffb8b73185e132ae6ece90af78b5700f36c11375d20b301a1a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_goldberg, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 20:42:45 compute-1 systemd[1]: Started libpod-conmon-bf271ae442860ffb8b73185e132ae6ece90af78b5700f36c11375d20b301a1a0.scope.
Nov 23 20:42:45 compute-1 systemd[1]: Started libcrun container.
Nov 23 20:42:45 compute-1 podman[85493]: 2025-11-23 20:42:45.196514228 +0000 UTC m=+0.021211768 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 20:42:45 compute-1 podman[85493]: 2025-11-23 20:42:45.301648618 +0000 UTC m=+0.126346228 container init bf271ae442860ffb8b73185e132ae6ece90af78b5700f36c11375d20b301a1a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 23 20:42:45 compute-1 podman[85493]: 2025-11-23 20:42:45.308194507 +0000 UTC m=+0.132892037 container start bf271ae442860ffb8b73185e132ae6ece90af78b5700f36c11375d20b301a1a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 20:42:45 compute-1 podman[85493]: 2025-11-23 20:42:45.312849411 +0000 UTC m=+0.137546951 container attach bf271ae442860ffb8b73185e132ae6ece90af78b5700f36c11375d20b301a1a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_goldberg, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 20:42:45 compute-1 goofy_goldberg[85510]: 167 167
Nov 23 20:42:45 compute-1 systemd[1]: libpod-bf271ae442860ffb8b73185e132ae6ece90af78b5700f36c11375d20b301a1a0.scope: Deactivated successfully.
Nov 23 20:42:45 compute-1 podman[85493]: 2025-11-23 20:42:45.314696026 +0000 UTC m=+0.139393576 container died bf271ae442860ffb8b73185e132ae6ece90af78b5700f36c11375d20b301a1a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 23 20:42:45 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/4054506421' entity='client.rgw.rgw.compute-0.lntkpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 23 20:42:45 compute-1 ceph-mon[80135]: from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 23 20:42:45 compute-1 ceph-mon[80135]: from='client.? ' entity='client.rgw.rgw.compute-1.exwrda' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 23 20:42:45 compute-1 ceph-mon[80135]: osdmap e50: 3 total, 3 up, 3 in
Nov 23 20:42:45 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:45 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:45 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:45 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:45 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:45 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:45 compute-1 ceph-mon[80135]: Creating key for client.nfs.cephfs.0.0.compute-1.fuxuha
Nov 23 20:42:45 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.fuxuha", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 23 20:42:45 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.fuxuha", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 23 20:42:45 compute-1 ceph-mon[80135]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Nov 23 20:42:45 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 23 20:42:45 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 23 20:42:45 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:42:45 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 23 20:42:45 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 23 20:42:45 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.fuxuha-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 23 20:42:45 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.fuxuha-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 23 20:42:45 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:42:45 compute-1 systemd[1]: var-lib-containers-storage-overlay-eb50ee9993dcf5154d2c27100485101537cd26c0ff85a4fced92bc7ac12309be-merged.mount: Deactivated successfully.
Nov 23 20:42:45 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).mds e7 new map
Nov 23 20:42:45 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).mds e7 print_map
                                           e7
                                           btime 2025-11-23T20:42:45.339402+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-23T20:42:15.389822+0000
                                           modified        2025-11-23T20:42:43.300628+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24181}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24181 members: 24181
                                           [mds.cephfs.compute-2.utubtn{0:24181} state up:active seq 2 addr [v2:192.168.122.102:6804/3232844591,v1:192.168.122.102:6805/3232844591] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.jcbopz{-1:14580} state up:standby seq 1 addr [v2:192.168.122.100:6806/3257423559,v1:192.168.122.100:6807/3257423559] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.gmfhnm{-1:24284} state up:standby seq 1 addr [v2:192.168.122.101:6804/3633651935,v1:192.168.122.101:6805/3633651935] compat {c=[1],r=[1],i=[1fff]}]
Nov 23 20:42:45 compute-1 podman[85493]: 2025-11-23 20:42:45.551686286 +0000 UTC m=+0.376383806 container remove bf271ae442860ffb8b73185e132ae6ece90af78b5700f36c11375d20b301a1a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_goldberg, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 23 20:42:45 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Updating MDS map to version 7 from mon.2
Nov 23 20:42:45 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Monitors have assigned me to become a standby
Nov 23 20:42:45 compute-1 systemd[1]: libpod-conmon-bf271ae442860ffb8b73185e132ae6ece90af78b5700f36c11375d20b301a1a0.scope: Deactivated successfully.
Nov 23 20:42:45 compute-1 systemd[1]: Reloading.
Nov 23 20:42:45 compute-1 systemd-rc-local-generator[85559]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:42:45 compute-1 systemd-sysv-generator[85563]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:42:45 compute-1 systemd[1]: Reloading.
Nov 23 20:42:45 compute-1 systemd-sysv-generator[85600]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:42:45 compute-1 systemd-rc-local-generator[85597]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:42:46 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 20:42:46 compute-1 podman[85653]: 2025-11-23 20:42:46.358127693 +0000 UTC m=+0.037333519 container create 466d10d0fad1c5a4f86b3f6ff6a62c2f5b4e27c7206b481850c43d696b989539 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 20:42:46 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8322ba23651b391cd38f2980d80d3d4d5a77a2d7c68fccc64436bbb1b0ee305/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 23 20:42:46 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8322ba23651b391cd38f2980d80d3d4d5a77a2d7c68fccc64436bbb1b0ee305/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 20:42:46 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8322ba23651b391cd38f2980d80d3d4d5a77a2d7c68fccc64436bbb1b0ee305/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 20:42:46 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8322ba23651b391cd38f2980d80d3d4d5a77a2d7c68fccc64436bbb1b0ee305/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.fuxuha-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 20:42:46 compute-1 podman[85653]: 2025-11-23 20:42:46.417352316 +0000 UTC m=+0.096558132 container init 466d10d0fad1c5a4f86b3f6ff6a62c2f5b4e27c7206b481850c43d696b989539 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 23 20:42:46 compute-1 podman[85653]: 2025-11-23 20:42:46.422347627 +0000 UTC m=+0.101553413 container start 466d10d0fad1c5a4f86b3f6ff6a62c2f5b4e27c7206b481850c43d696b989539 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 20:42:46 compute-1 bash[85653]: 466d10d0fad1c5a4f86b3f6ff6a62c2f5b4e27c7206b481850c43d696b989539
Nov 23 20:42:46 compute-1 podman[85653]: 2025-11-23 20:42:46.340283548 +0000 UTC m=+0.019489364 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 20:42:46 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 20:42:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:46 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 23 20:42:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:46 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 23 20:42:46 compute-1 sudo[85430]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:46 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 23 20:42:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:46 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 23 20:42:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:46 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 23 20:42:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:46 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 23 20:42:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:46 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 23 20:42:46 compute-1 ceph-mon[80135]: Rados config object exists: conf-nfs.cephfs
Nov 23 20:42:46 compute-1 ceph-mon[80135]: Creating key for client.nfs.cephfs.0.0.compute-1.fuxuha-rgw
Nov 23 20:42:46 compute-1 ceph-mon[80135]: Bind address in nfs.cephfs.0.0.compute-1.fuxuha's ganesha conf is defaulting to empty
Nov 23 20:42:46 compute-1 ceph-mon[80135]: Deploying daemon nfs.cephfs.0.0.compute-1.fuxuha on compute-1
Nov 23 20:42:46 compute-1 ceph-mon[80135]: pgmap v31: 136 pgs: 1 unknown, 1 creating+peering, 134 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:42:46 compute-1 ceph-mon[80135]: mds.? [v2:192.168.122.101:6804/3633651935,v1:192.168.122.101:6805/3633651935] up:boot
Nov 23 20:42:46 compute-1 ceph-mon[80135]: fsmap cephfs:1 {0=cephfs.compute-2.utubtn=up:active} 2 up:standby
Nov 23 20:42:46 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.gmfhnm"}]: dispatch
Nov 23 20:42:46 compute-1 ceph-mon[80135]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 23 20:42:46 compute-1 ceph-mon[80135]: Cluster is now healthy
Nov 23 20:42:46 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:46 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 20:42:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:46 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Nov 23 20:42:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:46 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Nov 23 20:42:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:46 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 20:42:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:46 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:42:46 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).mds e8 new map
Nov 23 20:42:46 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).mds e8 print_map
                                           e8
                                           btime 2025-11-23T20:42:46:698669+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        8
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-23T20:42:15.389822+0000
                                           modified        2025-11-23T20:42:46.341150+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24181}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24181 members: 24181
                                           [mds.cephfs.compute-2.utubtn{0:24181} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/3232844591,v1:192.168.122.102:6805/3232844591] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.jcbopz{-1:14580} state up:standby seq 1 addr [v2:192.168.122.100:6806/3257423559,v1:192.168.122.100:6807/3257423559] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.gmfhnm{-1:24284} state up:standby seq 1 addr [v2:192.168.122.101:6804/3633651935,v1:192.168.122.101:6805/3633651935] compat {c=[1],r=[1],i=[1fff]}]
Nov 23 20:42:46 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:42:47 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:47 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:47 compute-1 ceph-mon[80135]: Creating key for client.nfs.cephfs.1.0.compute-2.dqbktw
Nov 23 20:42:47 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dqbktw", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 23 20:42:47 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dqbktw", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 23 20:42:47 compute-1 ceph-mon[80135]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Nov 23 20:42:47 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 23 20:42:47 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 23 20:42:47 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:42:47 compute-1 ceph-mon[80135]: mds.? [v2:192.168.122.102:6804/3232844591,v1:192.168.122.102:6805/3232844591] up:active
Nov 23 20:42:47 compute-1 ceph-mon[80135]: fsmap cephfs:1 {0=cephfs.compute-2.utubtn=up:active} 2 up:standby
Nov 23 20:42:47 compute-1 ceph-mon[80135]: pgmap v32: 136 pgs: 136 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 146 KiB/s rd, 8.8 KiB/s wr, 274 op/s
Nov 23 20:42:47 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).mds e9 new map
Nov 23 20:42:47 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).mds e9 print_map
                                           e9
                                           btime 2025-11-23T20:42:47:710992+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        8
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-23T20:42:15.389822+0000
                                           modified        2025-11-23T20:42:46.341150+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24181}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24181 members: 24181
                                           [mds.cephfs.compute-2.utubtn{0:24181} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/3232844591,v1:192.168.122.102:6805/3232844591] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.jcbopz{-1:14580} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/3257423559,v1:192.168.122.100:6807/3257423559] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.gmfhnm{-1:24284} state up:standby seq 1 addr [v2:192.168.122.101:6804/3633651935,v1:192.168.122.101:6805/3633651935] compat {c=[1],r=[1],i=[1fff]}]
Nov 23 20:42:48 compute-1 ceph-mon[80135]: mds.? [v2:192.168.122.100:6806/3257423559,v1:192.168.122.100:6807/3257423559] up:standby
Nov 23 20:42:48 compute-1 ceph-mon[80135]: fsmap cephfs:1 {0=cephfs.compute-2.utubtn=up:active} 2 up:standby
Nov 23 20:42:48 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:49 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).mds e10 new map
Nov 23 20:42:49 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).mds e10 print_map
                                           e10
                                           btime 2025-11-23T20:42:49:046556+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        8
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-23T20:42:15.389822+0000
                                           modified        2025-11-23T20:42:46.341150+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24181}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24181 members: 24181
                                           [mds.cephfs.compute-2.utubtn{0:24181} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/3232844591,v1:192.168.122.102:6805/3232844591] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.jcbopz{-1:14580} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/3257423559,v1:192.168.122.100:6807/3257423559] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.gmfhnm{-1:24284} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/3633651935,v1:192.168.122.101:6805/3633651935] compat {c=[1],r=[1],i=[1fff]}]
Nov 23 20:42:49 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Updating MDS map to version 10 from mon.2
Nov 23 20:42:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000001:nfs.cephfs.0: -2
Nov 23 20:42:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 20:42:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 23 20:42:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 23 20:42:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 23 20:42:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 23 20:42:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 23 20:42:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 23 20:42:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 20:42:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 20:42:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 20:42:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 23 20:42:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 20:42:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 23 20:42:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 23 20:42:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 23 20:42:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 23 20:42:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 23 20:42:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 23 20:42:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 23 20:42:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 23 20:42:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 23 20:42:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 23 20:42:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 23 20:42:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 23 20:42:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 20:42:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 23 20:42:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 20:42:50 compute-1 ceph-mon[80135]: pgmap v33: 136 pgs: 136 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 125 KiB/s rd, 7.5 KiB/s wr, 233 op/s
Nov 23 20:42:50 compute-1 ceph-mon[80135]: mds.? [v2:192.168.122.101:6804/3633651935,v1:192.168.122.101:6805/3633651935] up:standby
Nov 23 20:42:50 compute-1 ceph-mon[80135]: fsmap cephfs:1 {0=cephfs.compute-2.utubtn=up:active} 2 up:standby
Nov 23 20:42:50 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 23 20:42:50 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 23 20:42:50 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dqbktw-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 23 20:42:50 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dqbktw-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 23 20:42:50 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:42:51 compute-1 ceph-mon[80135]: Rados config object exists: conf-nfs.cephfs
Nov 23 20:42:51 compute-1 ceph-mon[80135]: Creating key for client.nfs.cephfs.1.0.compute-2.dqbktw-rgw
Nov 23 20:42:51 compute-1 ceph-mon[80135]: Bind address in nfs.cephfs.1.0.compute-2.dqbktw's ganesha conf is defaulting to empty
Nov 23 20:42:51 compute-1 ceph-mon[80135]: Deploying daemon nfs.cephfs.1.0.compute-2.dqbktw on compute-2
Nov 23 20:42:51 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:42:51 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:51 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 20:42:51 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:51 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 20:42:51 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:51 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:42:52 compute-1 ceph-mon[80135]: pgmap v34: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 150 KiB/s rd, 7.1 KiB/s wr, 284 op/s
Nov 23 20:42:52 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:52 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:52 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:52 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.bfglcy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 23 20:42:52 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.bfglcy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 23 20:42:52 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 23 20:42:52 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 23 20:42:52 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:42:53 compute-1 ceph-mon[80135]: Creating key for client.nfs.cephfs.2.0.compute-0.bfglcy
Nov 23 20:42:53 compute-1 ceph-mon[80135]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Nov 23 20:42:53 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:53 compute-1 sshd-session[85723]: Invalid user user2 from 43.225.142.116 port 44070
Nov 23 20:42:53 compute-1 sshd-session[85723]: Received disconnect from 43.225.142.116 port 44070:11: Bye Bye [preauth]
Nov 23 20:42:53 compute-1 sshd-session[85723]: Disconnected from invalid user user2 43.225.142.116 port 44070 [preauth]
Nov 23 20:42:54 compute-1 ceph-mon[80135]: pgmap v35: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 125 KiB/s rd, 5.9 KiB/s wr, 237 op/s
Nov 23 20:42:54 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:54 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 20:42:55 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 23 20:42:55 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 23 20:42:55 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.bfglcy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 23 20:42:55 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.bfglcy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 23 20:42:55 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:42:56 compute-1 ceph-mon[80135]: pgmap v36: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 113 KiB/s rd, 5.4 KiB/s wr, 215 op/s
Nov 23 20:42:56 compute-1 ceph-mon[80135]: Rados config object exists: conf-nfs.cephfs
Nov 23 20:42:56 compute-1 ceph-mon[80135]: Creating key for client.nfs.cephfs.2.0.compute-0.bfglcy-rgw
Nov 23 20:42:56 compute-1 ceph-mon[80135]: Bind address in nfs.cephfs.2.0.compute-0.bfglcy's ganesha conf is defaulting to empty
Nov 23 20:42:56 compute-1 ceph-mon[80135]: Deploying daemon nfs.cephfs.2.0.compute-0.bfglcy on compute-0
Nov 23 20:42:56 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:42:57 compute-1 sudo[85725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:42:57 compute-1 sudo[85725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:57 compute-1 sudo[85725]: pam_unix(sudo:session): session closed for user root
Nov 23 20:42:57 compute-1 sudo[85750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:42:57 compute-1 sudo[85750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:42:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:57 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 20:42:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:57 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 20:42:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:57 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:42:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:57 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:42:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:57 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:42:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:57 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 20:42:58 compute-1 ceph-mon[80135]: pgmap v37: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 5.6 KiB/s wr, 192 op/s
Nov 23 20:42:58 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:58 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:58 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:58 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:58 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:42:58 compute-1 ceph-mon[80135]: Deploying daemon haproxy.nfs.cephfs.compute-1.iwomei on compute-1
Nov 23 20:42:59 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:00 compute-1 ceph-mon[80135]: pgmap v38: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 1.4 KiB/s wr, 63 op/s
Nov 23 20:43:00 compute-1 podman[85815]: 2025-11-23 20:43:00.632821732 +0000 UTC m=+2.966877935 container create 9873f14bf1083298c9392744801ca342cd89baf36930faa141afa81364969b82 (image=quay.io/ceph/haproxy:2.3, name=sad_lehmann)
Nov 23 20:43:00 compute-1 systemd[1]: Started libpod-conmon-9873f14bf1083298c9392744801ca342cd89baf36930faa141afa81364969b82.scope.
Nov 23 20:43:00 compute-1 systemd[1]: Started libcrun container.
Nov 23 20:43:00 compute-1 podman[85815]: 2025-11-23 20:43:00.618223107 +0000 UTC m=+2.952279330 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 23 20:43:00 compute-1 podman[85815]: 2025-11-23 20:43:00.700535242 +0000 UTC m=+3.034591445 container init 9873f14bf1083298c9392744801ca342cd89baf36930faa141afa81364969b82 (image=quay.io/ceph/haproxy:2.3, name=sad_lehmann)
Nov 23 20:43:00 compute-1 podman[85815]: 2025-11-23 20:43:00.706250401 +0000 UTC m=+3.040306604 container start 9873f14bf1083298c9392744801ca342cd89baf36930faa141afa81364969b82 (image=quay.io/ceph/haproxy:2.3, name=sad_lehmann)
Nov 23 20:43:00 compute-1 podman[85815]: 2025-11-23 20:43:00.708692051 +0000 UTC m=+3.042748254 container attach 9873f14bf1083298c9392744801ca342cd89baf36930faa141afa81364969b82 (image=quay.io/ceph/haproxy:2.3, name=sad_lehmann)
Nov 23 20:43:00 compute-1 sad_lehmann[85930]: 0 0
Nov 23 20:43:00 compute-1 systemd[1]: libpod-9873f14bf1083298c9392744801ca342cd89baf36930faa141afa81364969b82.scope: Deactivated successfully.
Nov 23 20:43:00 compute-1 podman[85815]: 2025-11-23 20:43:00.711313424 +0000 UTC m=+3.045369627 container died 9873f14bf1083298c9392744801ca342cd89baf36930faa141afa81364969b82 (image=quay.io/ceph/haproxy:2.3, name=sad_lehmann)
Nov 23 20:43:00 compute-1 systemd[1]: var-lib-containers-storage-overlay-2b5425586260c5292cbb2df48e662a158f895e72f187fbdf44270cda1f0c3018-merged.mount: Deactivated successfully.
Nov 23 20:43:00 compute-1 podman[85815]: 2025-11-23 20:43:00.757178681 +0000 UTC m=+3.091234884 container remove 9873f14bf1083298c9392744801ca342cd89baf36930faa141afa81364969b82 (image=quay.io/ceph/haproxy:2.3, name=sad_lehmann)
Nov 23 20:43:00 compute-1 systemd[1]: libpod-conmon-9873f14bf1083298c9392744801ca342cd89baf36930faa141afa81364969b82.scope: Deactivated successfully.
Nov 23 20:43:00 compute-1 systemd[1]: Reloading.
Nov 23 20:43:00 compute-1 systemd-rc-local-generator[85972]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:43:00 compute-1 systemd-sysv-generator[85979]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:43:01 compute-1 systemd[1]: Reloading.
Nov 23 20:43:01 compute-1 systemd-rc-local-generator[86016]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:43:01 compute-1 systemd-sysv-generator[86020]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:43:01 compute-1 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-1.iwomei for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 20:43:01 compute-1 podman[86074]: 2025-11-23 20:43:01.523399798 +0000 UTC m=+0.036995672 container create 5efdb4ba0bcd5fe6f292f73f388707523f3095db64c5b10f074cdf2e15575dfb (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei)
Nov 23 20:43:01 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99f9df31263d11eb37a2048e9a899a06412245a22ed7b0de6d53ce609478ae00/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Nov 23 20:43:01 compute-1 podman[86074]: 2025-11-23 20:43:01.569423779 +0000 UTC m=+0.083019673 container init 5efdb4ba0bcd5fe6f292f73f388707523f3095db64c5b10f074cdf2e15575dfb (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei)
Nov 23 20:43:01 compute-1 podman[86074]: 2025-11-23 20:43:01.573631241 +0000 UTC m=+0.087227115 container start 5efdb4ba0bcd5fe6f292f73f388707523f3095db64c5b10f074cdf2e15575dfb (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei)
Nov 23 20:43:01 compute-1 bash[86074]: 5efdb4ba0bcd5fe6f292f73f388707523f3095db64c5b10f074cdf2e15575dfb
Nov 23 20:43:01 compute-1 podman[86074]: 2025-11-23 20:43:01.507727446 +0000 UTC m=+0.021323340 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 23 20:43:01 compute-1 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-1.iwomei for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 20:43:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [NOTICE] 326/204301 (2) : New worker #1 (4) forked
Nov 23 20:43:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:01 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:01 compute-1 sudo[85750]: pam_unix(sudo:session): session closed for user root
Nov 23 20:43:01 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:43:02 compute-1 ceph-mon[80135]: pgmap v39: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 2.4 KiB/s wr, 67 op/s
Nov 23 20:43:02 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:02 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:02 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:03 compute-1 ceph-mon[80135]: Deploying daemon haproxy.nfs.cephfs.compute-0.uvukit on compute-0
Nov 23 20:43:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:03 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:04 compute-1 ceph-mon[80135]: pgmap v40: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Nov 23 20:43:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:05 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:05 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6930000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:06 compute-1 ceph-mon[80135]: pgmap v41: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Nov 23 20:43:06 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:06 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:06 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:06 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:43:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:07 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6950001da0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:07 compute-1 ceph-mon[80135]: Deploying daemon haproxy.nfs.cephfs.compute-2.dxqoem on compute-2
Nov 23 20:43:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:07 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540023e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:08 compute-1 ceph-mon[80135]: pgmap v42: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Nov 23 20:43:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:09 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:09 compute-1 ceph-mon[80135]: pgmap v43: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1023 B/s wr, 4 op/s
Nov 23 20:43:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:09 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:10 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:10 compute-1 sudo[86105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:43:10 compute-1 sudo[86105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:43:10 compute-1 sudo[86105]: pam_unix(sudo:session): session closed for user root
Nov 23 20:43:10 compute-1 sudo[86130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:43:10 compute-1 sudo[86130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:43:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:11 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69300016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:11 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:11 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:11 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:11 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:11 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500028a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:11 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:43:12 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:12 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:12 compute-1 ceph-mon[80135]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 23 20:43:12 compute-1 ceph-mon[80135]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 23 20:43:12 compute-1 ceph-mon[80135]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 23 20:43:12 compute-1 ceph-mon[80135]: Deploying daemon keepalived.nfs.cephfs.compute-1.lwmzxc on compute-1
Nov 23 20:43:12 compute-1 ceph-mon[80135]: pgmap v44: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1023 B/s wr, 4 op/s
Nov 23 20:43:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:13 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0021f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:13 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 23 20:43:13 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e51 e51: 3 total, 3 up, 3 in
Nov 23 20:43:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:13 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69300016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:14 compute-1 podman[86196]: 2025-11-23 20:43:14.143753947 +0000 UTC m=+3.154170866 container create 7cd15430a35da61fca1d997bd4e2ca801ba61ed44b801bb28f09fcf5c663991f (image=quay.io/ceph/keepalived:2.2.4, name=pensive_wu, architecture=x86_64, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, release=1793, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., vcs-type=git, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, name=keepalived, distribution-scope=public, io.openshift.expose-services=)
Nov 23 20:43:14 compute-1 systemd[1]: Started libpod-conmon-7cd15430a35da61fca1d997bd4e2ca801ba61ed44b801bb28f09fcf5c663991f.scope.
Nov 23 20:43:14 compute-1 systemd[1]: Started libcrun container.
Nov 23 20:43:14 compute-1 podman[86196]: 2025-11-23 20:43:14.125221046 +0000 UTC m=+3.135637995 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 23 20:43:14 compute-1 podman[86196]: 2025-11-23 20:43:14.216747904 +0000 UTC m=+3.227164823 container init 7cd15430a35da61fca1d997bd4e2ca801ba61ed44b801bb28f09fcf5c663991f (image=quay.io/ceph/keepalived:2.2.4, name=pensive_wu, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.buildah.version=1.28.2, release=1793, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, com.redhat.component=keepalived-container, description=keepalived for Ceph, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4)
Nov 23 20:43:14 compute-1 podman[86196]: 2025-11-23 20:43:14.224795421 +0000 UTC m=+3.235212320 container start 7cd15430a35da61fca1d997bd4e2ca801ba61ed44b801bb28f09fcf5c663991f (image=quay.io/ceph/keepalived:2.2.4, name=pensive_wu, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, vcs-type=git, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.openshift.expose-services=, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 23 20:43:14 compute-1 podman[86196]: 2025-11-23 20:43:14.228639004 +0000 UTC m=+3.239055893 container attach 7cd15430a35da61fca1d997bd4e2ca801ba61ed44b801bb28f09fcf5c663991f (image=quay.io/ceph/keepalived:2.2.4, name=pensive_wu, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, version=2.2.4, distribution-scope=public, vcs-type=git, io.openshift.tags=Ceph keepalived, architecture=x86_64, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, release=1793, vendor=Red Hat, Inc.)
Nov 23 20:43:14 compute-1 pensive_wu[86289]: 0 0
Nov 23 20:43:14 compute-1 systemd[1]: libpod-7cd15430a35da61fca1d997bd4e2ca801ba61ed44b801bb28f09fcf5c663991f.scope: Deactivated successfully.
Nov 23 20:43:14 compute-1 conmon[86289]: conmon 7cd15430a35da61fca1d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7cd15430a35da61fca1d997bd4e2ca801ba61ed44b801bb28f09fcf5c663991f.scope/container/memory.events
Nov 23 20:43:14 compute-1 podman[86196]: 2025-11-23 20:43:14.232757015 +0000 UTC m=+3.243173924 container died 7cd15430a35da61fca1d997bd4e2ca801ba61ed44b801bb28f09fcf5c663991f (image=quay.io/ceph/keepalived:2.2.4, name=pensive_wu, vendor=Red Hat, Inc., version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, vcs-type=git, io.openshift.tags=Ceph keepalived, architecture=x86_64, io.buildah.version=1.28.2, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Nov 23 20:43:14 compute-1 systemd[1]: var-lib-containers-storage-overlay-3c3c43ee5078ece2bf91e0477512df0f6301ada03b2edfc0062513138b9a6431-merged.mount: Deactivated successfully.
Nov 23 20:43:14 compute-1 podman[86196]: 2025-11-23 20:43:14.273516137 +0000 UTC m=+3.283933036 container remove 7cd15430a35da61fca1d997bd4e2ca801ba61ed44b801bb28f09fcf5c663991f (image=quay.io/ceph/keepalived:2.2.4, name=pensive_wu, build-date=2023-02-22T09:23:20, version=2.2.4, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, release=1793, description=keepalived for Ceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container)
Nov 23 20:43:14 compute-1 systemd[1]: libpod-conmon-7cd15430a35da61fca1d997bd4e2ca801ba61ed44b801bb28f09fcf5c663991f.scope: Deactivated successfully.
Nov 23 20:43:14 compute-1 systemd[1]: Reloading.
Nov 23 20:43:14 compute-1 systemd-sysv-generator[86341]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:43:14 compute-1 systemd-rc-local-generator[86336]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:43:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:14 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69300016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:14 compute-1 ceph-mon[80135]: pgmap v45: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:43:14 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 23 20:43:14 compute-1 ceph-mon[80135]: osdmap e51: 3 total, 3 up, 3 in
Nov 23 20:43:14 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 20:43:14 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e52 e52: 3 total, 3 up, 3 in
Nov 23 20:43:14 compute-1 systemd[1]: Reloading.
Nov 23 20:43:14 compute-1 systemd-sysv-generator[86380]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:43:14 compute-1 systemd-rc-local-generator[86376]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:43:14 compute-1 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-1.lwmzxc for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 20:43:15 compute-1 podman[86434]: 2025-11-23 20:43:15.033747429 +0000 UTC m=+0.021280659 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 23 20:43:15 compute-1 podman[86434]: 2025-11-23 20:43:15.199238719 +0000 UTC m=+0.186771929 container create 2804f80c8f66202230c93ef9e5dfb79827d221d8c2f51d077915585a4021bec3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, version=2.2.4, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, release=1793, vendor=Red Hat, Inc.)
Nov 23 20:43:15 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d71e36f41e8d5e45c92a16bf1811bab47b38b430083dfb9b697e6aa3e7e1a93e/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 20:43:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:15 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500028a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:15 compute-1 podman[86434]: 2025-11-23 20:43:15.406003844 +0000 UTC m=+0.393537084 container init 2804f80c8f66202230c93ef9e5dfb79827d221d8c2f51d077915585a4021bec3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, version=2.2.4, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, distribution-scope=public, release=1793, com.redhat.component=keepalived-container)
Nov 23 20:43:15 compute-1 podman[86434]: 2025-11-23 20:43:15.411005255 +0000 UTC m=+0.398538475 container start 2804f80c8f66202230c93ef9e5dfb79827d221d8c2f51d077915585a4021bec3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, version=2.2.4, architecture=x86_64, io.buildah.version=1.28.2, description=keepalived for Ceph, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, release=1793, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived)
Nov 23 20:43:15 compute-1 bash[86434]: 2804f80c8f66202230c93ef9e5dfb79827d221d8c2f51d077915585a4021bec3
Nov 23 20:43:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc[86449]: Sun Nov 23 20:43:15 2025: Starting Keepalived v2.2.4 (08/21,2021)
Nov 23 20:43:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc[86449]: Sun Nov 23 20:43:15 2025: Running on Linux 5.14.0-639.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025 (built for Linux 5.14.0)
Nov 23 20:43:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc[86449]: Sun Nov 23 20:43:15 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Nov 23 20:43:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc[86449]: Sun Nov 23 20:43:15 2025: Configuration file /etc/keepalived/keepalived.conf
Nov 23 20:43:15 compute-1 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-1.lwmzxc for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 20:43:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc[86449]: Sun Nov 23 20:43:15 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Nov 23 20:43:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc[86449]: Sun Nov 23 20:43:15 2025: Starting VRRP child process, pid=4
Nov 23 20:43:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc[86449]: Sun Nov 23 20:43:15 2025: Startup complete
Nov 23 20:43:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc[86449]: Sun Nov 23 20:43:15 2025: (VI_0) Entering BACKUP STATE (init)
Nov 23 20:43:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc[86449]: Sun Nov 23 20:43:15 2025: VRRP_Script(check_backend) succeeded
Nov 23 20:43:15 compute-1 sudo[86130]: pam_unix(sudo:session): session closed for user root
Nov 23 20:43:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:15 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0021f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:15 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e53 e53: 3 total, 3 up, 3 in
Nov 23 20:43:15 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 23 20:43:15 compute-1 ceph-mon[80135]: osdmap e52: 3 total, 3 up, 3 in
Nov 23 20:43:15 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 20:43:15 compute-1 ceph-mon[80135]: pgmap v48: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Nov 23 20:43:15 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 20:43:15 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 23 20:43:15 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 53 pg[7.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=53 pruub=13.259194374s) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active pruub 174.180419922s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:15 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 53 pg[7.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=53 pruub=13.259194374s) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown pruub 174.180419922s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:16 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69300016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:16 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:43:16 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e54 e54: 3 total, 3 up, 3 in
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.16( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.15( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.a( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.c( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.4( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.1f( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.1d( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.1c( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.13( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.10( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.17( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.11( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.b( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.9( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.8( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.14( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.e( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.6( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.12( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.5( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.1( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.7( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.3( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.2( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.d( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.f( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.1e( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.19( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.18( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.1b( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.1a( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.16( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.15( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.a( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.c( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.1f( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.4( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.1d( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.13( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.1c( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.10( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.17( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.11( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.9( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.8( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.b( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.e( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.14( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.6( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.0( empty local-lis/les=53/54 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.5( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.d( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.1( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.7( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.2( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.1e( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.f( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.18( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.1a( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.12( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.1b( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.19( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.3( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:16 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:16 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Nov 23 20:43:16 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 20:43:16 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 23 20:43:16 compute-1 ceph-mon[80135]: osdmap e53: 3 total, 3 up, 3 in
Nov 23 20:43:16 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 20:43:16 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:16 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:16 compute-1 ceph-mon[80135]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 23 20:43:16 compute-1 ceph-mon[80135]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 23 20:43:16 compute-1 ceph-mon[80135]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 23 20:43:16 compute-1 ceph-mon[80135]: Deploying daemon keepalived.nfs.cephfs.compute-0.spcytb on compute-0
Nov 23 20:43:17 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.15 deep-scrub starts
Nov 23 20:43:17 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.15 deep-scrub ok
Nov 23 20:43:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:17 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69300016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:17 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500028a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:17 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e55 e55: 3 total, 3 up, 3 in
Nov 23 20:43:17 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 23 20:43:17 compute-1 ceph-mon[80135]: osdmap e54: 3 total, 3 up, 3 in
Nov 23 20:43:17 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 20:43:17 compute-1 ceph-mon[80135]: pgmap v51: 182 pgs: 46 unknown, 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:43:17 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 20:43:17 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 20:43:17 compute-1 ceph-mon[80135]: 7.15 deep-scrub starts
Nov 23 20:43:17 compute-1 ceph-mon[80135]: 7.15 deep-scrub ok
Nov 23 20:43:18 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Nov 23 20:43:18 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Nov 23 20:43:18 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:18 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0021f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:18 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e56 e56: 3 total, 3 up, 3 in
Nov 23 20:43:18 compute-1 ceph-mon[80135]: 6.c scrub starts
Nov 23 20:43:18 compute-1 ceph-mon[80135]: 6.c scrub ok
Nov 23 20:43:18 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 23 20:43:18 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 20:43:18 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 20:43:18 compute-1 ceph-mon[80135]: osdmap e55: 3 total, 3 up, 3 in
Nov 23 20:43:18 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 20:43:18 compute-1 ceph-mon[80135]: 7.16 scrub starts
Nov 23 20:43:18 compute-1 ceph-mon[80135]: 7.16 scrub ok
Nov 23 20:43:18 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 23 20:43:18 compute-1 ceph-mon[80135]: osdmap e56: 3 total, 3 up, 3 in
Nov 23 20:43:18 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.a deep-scrub starts
Nov 23 20:43:18 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.a deep-scrub ok
Nov 23 20:43:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc[86449]: Sun Nov 23 20:43:19 2025: (VI_0) Entering MASTER STATE
Nov 23 20:43:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:19 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0021f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:19 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0021f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:19 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Nov 23 20:43:20 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Nov 23 20:43:20 compute-1 sshd-session[86458]: Invalid user yhli from 102.176.81.29 port 45342
Nov 23 20:43:20 compute-1 sshd-session[86458]: Received disconnect from 102.176.81.29 port 45342:11: Bye Bye [preauth]
Nov 23 20:43:20 compute-1 sshd-session[86458]: Disconnected from invalid user yhli 102.176.81.29 port 45342 [preauth]
Nov 23 20:43:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:20 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:20 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e57 e57: 3 total, 3 up, 3 in
Nov 23 20:43:20 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 57 pg[10.0( v 50'991 (0'0,50'991] local-lis/les=44/45 n=178 ec=44/44 lis/c=44/44 les/c/f=45/45/0 sis=57 pruub=14.416520119s) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 50'990 mlcod 50'990 active pruub 180.095977783s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:20 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 57 pg[10.0( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=5 ec=44/44 lis/c=44/44 les/c/f=45/45/0 sis=57 pruub=14.416520119s) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 50'990 mlcod 0'0 unknown pruub 180.095977783s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:20 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b2e6f28 space 0x55805b2e29d0 0x0~1000 clean)
Nov 23 20:43:20 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b2d5428 space 0x55805b0e4aa0 0x0~1000 clean)
Nov 23 20:43:20 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b2d4528 space 0x55805b2ad2c0 0x0~1000 clean)
Nov 23 20:43:20 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b320168 space 0x55805b2e2760 0x0~1000 clean)
Nov 23 20:43:20 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b3207a8 space 0x55805b345390 0x0~1000 clean)
Nov 23 20:43:20 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b2e7f68 space 0x55805b2e2aa0 0x0~1000 clean)
Nov 23 20:43:20 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b33cc08 space 0x55805b2ad940 0x0~1000 clean)
Nov 23 20:43:20 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b2fe028 space 0x55805b2ad6d0 0x0~1000 clean)
Nov 23 20:43:20 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b2d4668 space 0x55805b12ede0 0x0~1000 clean)
Nov 23 20:43:20 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b321d88 space 0x55805b345e20 0x0~1000 clean)
Nov 23 20:43:20 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b1063e8 space 0x55805b2e36d0 0x0~1000 clean)
Nov 23 20:43:20 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b2e7248 space 0x55805b2ad530 0x0~1000 clean)
Nov 23 20:43:20 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b33c528 space 0x55805b2ad7a0 0x0~1000 clean)
Nov 23 20:43:20 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b2e6348 space 0x55805b2e3d50 0x0~1000 clean)
Nov 23 20:43:20 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b2d47a8 space 0x55805b2ad600 0x0~1000 clean)
Nov 23 20:43:20 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b2e7568 space 0x55805b2e2eb0 0x0~1000 clean)
Nov 23 20:43:20 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b320208 space 0x55805b2adae0 0x0~1000 clean)
Nov 23 20:43:20 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b321ba8 space 0x55805b344280 0x0~1000 clean)
Nov 23 20:43:20 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b302708 space 0x55805b2e32c0 0x0~1000 clean)
Nov 23 20:43:20 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b2d43e8 space 0x55805b2ad390 0x0~1000 clean)
Nov 23 20:43:20 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805adb7c48 space 0x55805b12f940 0x0~1000 clean)
Nov 23 20:43:20 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b2e72e8 space 0x55805b2ac280 0x0~1000 clean)
Nov 23 20:43:20 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b2fede8 space 0x55805b374900 0x0~1000 clean)
Nov 23 20:43:20 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b2e59c8 space 0x55805b2e3870 0x0~1000 clean)
Nov 23 20:43:20 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b30b248 space 0x55805b344010 0x0~1000 clean)
Nov 23 20:43:20 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b320a28 space 0x55805b2ad460 0x0~1000 clean)
Nov 23 20:43:20 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b0e2208 space 0x55805b2e2690 0x0~1000 clean)
Nov 23 20:43:20 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b33c5c8 space 0x55805b2ad870 0x0~1000 clean)
Nov 23 20:43:20 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805ad6f108 space 0x55805b2ad1f0 0x0~1000 clean)
Nov 23 20:43:20 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b30ac08 space 0x55805b2e3ef0 0x0~1000 clean)
Nov 23 20:43:20 compute-1 ceph-mon[80135]: 6.b scrub starts
Nov 23 20:43:20 compute-1 ceph-mon[80135]: 6.b scrub ok
Nov 23 20:43:20 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 20:43:20 compute-1 ceph-mon[80135]: pgmap v54: 244 pgs: 108 unknown, 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:43:20 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 20:43:20 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 20:43:20 compute-1 ceph-mon[80135]: 7.a deep-scrub starts
Nov 23 20:43:20 compute-1 ceph-mon[80135]: 7.a deep-scrub ok
Nov 23 20:43:21 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.c scrub starts
Nov 23 20:43:21 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.c scrub ok
Nov 23 20:43:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:21 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:21 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e58 e58: 3 total, 3 up, 3 in
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.17( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.15( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.16( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.14( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.13( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.2( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.f( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.e( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.c( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.a( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.d( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.8( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.3( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.b( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.5( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.4( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.6( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[12.0( empty local-lis/les=48/49 n=0 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=58 pruub=9.881776810s) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active pruub 176.305847168s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.19( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.1a( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.1c( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.1d( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.1f( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.1e( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.10( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.12( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.9( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.7( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.1( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.18( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.1b( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.11( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[12.0( empty local-lis/les=48/49 n=0 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=58 pruub=9.881776810s) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown pruub 176.305847168s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.14( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.16( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.13( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.15( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.2( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.0( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=44/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 50'990 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.f( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.d( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.17( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.8( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.e( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.a( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.b( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.5( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.c( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.6( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.3( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.4( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.1c( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.1a( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.1d( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.10( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.7( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.12( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.9( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.1( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.18( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:21 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.11( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:21 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500028a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:21 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:43:21 compute-1 ceph-mon[80135]: 6.9 scrub starts
Nov 23 20:43:21 compute-1 ceph-mon[80135]: 6.9 scrub ok
Nov 23 20:43:21 compute-1 ceph-mon[80135]: 7.1f scrub starts
Nov 23 20:43:21 compute-1 ceph-mon[80135]: 7.1f scrub ok
Nov 23 20:43:21 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 23 20:43:21 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 20:43:21 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 20:43:21 compute-1 ceph-mon[80135]: osdmap e57: 3 total, 3 up, 3 in
Nov 23 20:43:21 compute-1 ceph-mon[80135]: 6.f scrub starts
Nov 23 20:43:21 compute-1 ceph-mon[80135]: 6.f scrub ok
Nov 23 20:43:21 compute-1 ceph-mon[80135]: pgmap v56: 306 pgs: 62 unknown, 244 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:43:21 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 20:43:21 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:21 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:21 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:21 compute-1 ceph-mon[80135]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 23 20:43:21 compute-1 ceph-mon[80135]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 23 20:43:21 compute-1 ceph-mon[80135]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 23 20:43:21 compute-1 ceph-mon[80135]: Deploying daemon keepalived.nfs.cephfs.compute-2.cpybdt on compute-2
Nov 23 20:43:21 compute-1 ceph-mon[80135]: 7.c scrub starts
Nov 23 20:43:21 compute-1 ceph-mon[80135]: 7.c scrub ok
Nov 23 20:43:21 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 20:43:21 compute-1 ceph-mon[80135]: osdmap e58: 3 total, 3 up, 3 in
Nov 23 20:43:22 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Nov 23 20:43:22 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Nov 23 20:43:22 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e59 e59: 3 total, 3 up, 3 in
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.11( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.10( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.13( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.4( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.12( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.15( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.6( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.9( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.8( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.a( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.c( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.b( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.e( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.5( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.2( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.d( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.3( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.1f( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.1c( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.1a( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.1b( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.18( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.19( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.16( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.14( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.f( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.1( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.7( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.1e( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.1d( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.17( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.11( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:22 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009e30 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.10( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.13( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.4( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.15( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.12( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.6( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.9( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.8( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.c( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.a( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.b( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.e( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.2( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.5( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.d( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.3( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.0( empty local-lis/les=58/59 n=0 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.1b( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.1a( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.14( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.16( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.1c( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.1f( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.19( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.1( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.1e( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.f( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.7( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.18( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.1d( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.17( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:23 compute-1 ceph-mon[80135]: 6.a scrub starts
Nov 23 20:43:23 compute-1 ceph-mon[80135]: 6.a scrub ok
Nov 23 20:43:23 compute-1 ceph-mon[80135]: 7.1d scrub starts
Nov 23 20:43:23 compute-1 ceph-mon[80135]: 7.1d scrub ok
Nov 23 20:43:23 compute-1 ceph-mon[80135]: osdmap e59: 3 total, 3 up, 3 in
Nov 23 20:43:23 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Nov 23 20:43:23 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Nov 23 20:43:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:23 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009e30 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:23 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69300032f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc[86449]: Sun Nov 23 20:43:23 2025: (VI_0) Master received advert from 192.168.122.100 with higher priority 100, ours 90
Nov 23 20:43:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc[86449]: Sun Nov 23 20:43:23 2025: (VI_0) Entering BACKUP STATE
Nov 23 20:43:24 compute-1 ceph-mon[80135]: 6.3 scrub starts
Nov 23 20:43:24 compute-1 ceph-mon[80135]: 6.3 scrub ok
Nov 23 20:43:24 compute-1 ceph-mon[80135]: pgmap v59: 337 pgs: 93 unknown, 244 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:43:24 compute-1 ceph-mon[80135]: 7.4 scrub starts
Nov 23 20:43:24 compute-1 ceph-mon[80135]: 7.4 scrub ok
Nov 23 20:43:24 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:24 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Nov 23 20:43:24 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Nov 23 20:43:24 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:24 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500028a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:25 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Nov 23 20:43:25 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Nov 23 20:43:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:25 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:25 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009e30 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:26 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.1c deep-scrub starts
Nov 23 20:43:26 compute-1 ceph-mon[80135]: 6.5 scrub starts
Nov 23 20:43:26 compute-1 ceph-mon[80135]: 6.5 scrub ok
Nov 23 20:43:26 compute-1 ceph-mon[80135]: 7.13 scrub starts
Nov 23 20:43:26 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.1c deep-scrub ok
Nov 23 20:43:26 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:26 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6930003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:26 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:43:26 compute-1 ceph-mon[80135]: 7.13 scrub ok
Nov 23 20:43:26 compute-1 ceph-mon[80135]: 6.4 scrub starts
Nov 23 20:43:26 compute-1 ceph-mon[80135]: 6.4 scrub ok
Nov 23 20:43:26 compute-1 ceph-mon[80135]: pgmap v60: 337 pgs: 93 unknown, 244 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:43:26 compute-1 ceph-mon[80135]: 7.10 scrub starts
Nov 23 20:43:26 compute-1 ceph-mon[80135]: 7.10 scrub ok
Nov 23 20:43:26 compute-1 ceph-mon[80135]: 6.0 scrub starts
Nov 23 20:43:26 compute-1 ceph-mon[80135]: 6.0 scrub ok
Nov 23 20:43:26 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:26 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:26 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:26 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:26 compute-1 ceph-mon[80135]: Deploying daemon alertmanager.compute-0 on compute-0
Nov 23 20:43:26 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 20:43:26 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 20:43:26 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 20:43:26 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 23 20:43:26 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 20:43:26 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 23 20:43:26 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 20:43:27 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Nov 23 20:43:27 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Nov 23 20:43:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:27 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500028a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:27 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e60 e60: 3 total, 3 up, 3 in
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[9.12( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[8.19( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[11.1a( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[9.a( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[8.10( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[9.11( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[8.12( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[11.1e( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[11.1c( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[11.1d( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[8.18( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[11.1b( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[8.1b( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[11.7( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[8.4( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[9.6( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[11.4( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[11.5( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[9.e( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[8.8( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[9.f( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[11.f( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[9.d( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[11.1( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[11.12( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[9.10( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[11.14( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[8.17( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[8.14( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[9.15( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.11( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.984288216s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.433227539s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.10( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.989400864s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.438369751s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.10( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.989379883s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.438369751s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.18( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.367900848s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.817062378s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.11( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.984259605s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.433227539s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.18( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.367888451s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.817062378s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.17( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.982778549s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 182.431915283s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.15( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.982155800s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 182.431396484s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.15( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.982145309s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.431396484s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.13( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.989006996s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.438354492s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.1b( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.367665291s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.817108154s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.17( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.982691765s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.431915283s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.13( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.988914490s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.438354492s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.1b( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.367646217s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.817108154s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.12( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.988936424s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.438629150s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.12( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.988898277s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.438629150s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.1e( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.367081642s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.816986084s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.13( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.981494904s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 182.431411743s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.f( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.367024422s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.816986084s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.4( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.988358498s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.438400269s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.f( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.366985321s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.816986084s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.4( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.988343239s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.438400269s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.13( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.981382370s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.431411743s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.1e( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.366968155s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.816986084s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.f( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.981339455s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 182.431854248s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.f( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.981307030s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.431854248s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.6( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.988631248s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.438995361s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.9( v 59'1 (0'0,59'1] local-lis/les=58/59 n=1 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.988500595s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=59'1 lcod 0'0 mlcod 0'0 active pruub 183.439117432s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.9( v 59'1 (0'0,59'1] local-lis/les=58/59 n=1 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.988471985s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=59'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 183.439117432s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.6( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.988383293s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.438995361s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.2( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.366314888s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.817001343s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.3( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.366045952s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.816894531s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.8( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.988273621s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.439132690s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.2( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.366248131s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.817001343s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.3( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.366029739s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.816894531s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.a( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.988091469s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.439178467s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.8( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.988256454s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.439132690s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.a( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.988065720s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.439178467s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.c( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.987947464s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.439163208s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.c( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.987930298s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.439163208s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.d( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.980611801s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 182.431869507s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.d( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.980578423s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.431869507s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.5( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.365480423s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.816894531s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.5( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.365449905s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.816894531s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.e( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.987771988s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.439315796s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.e( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.987749100s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.439315796s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.6( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.364793777s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.816421509s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.b( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.987556458s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.439285278s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.6( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.364755630s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.816421509s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.3( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.980506897s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 182.432418823s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.b( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.980226517s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 182.432052612s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.b( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.987535477s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.439285278s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.e( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.364359856s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.816406250s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.3( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.980488777s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.432418823s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.e( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.364325523s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.816406250s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.b( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.980042458s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.432052612s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.2( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.987216949s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.439407349s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.9( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.364044189s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.816329956s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.9( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.364027023s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.816329956s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.2( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.987140656s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.439407349s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.5( v 59'994 (0'0,59'994] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.979757309s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=58'992 lcod 58'993 mlcod 58'993 active pruub 182.432083130s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.8( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.363799095s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.816345215s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.8( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.363780975s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.816345215s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.5( v 59'994 (0'0,59'994] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.979698181s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=58'992 lcod 58'993 mlcod 0'0 unknown NOTIFY pruub 182.432083130s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.b( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.363586426s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.816345215s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.3( v 59'1 (0'0,59'1] local-lis/les=58/59 n=1 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.986955643s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=59'1 lcod 0'0 mlcod 0'0 active pruub 183.439453125s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.3( v 59'1 (0'0,59'1] local-lis/les=58/59 n=1 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.986624718s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=59'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 183.439453125s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.b( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.363509178s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.816345215s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.1c( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.988234520s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.441223145s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.979729652s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 182.432739258s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.11( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.363109589s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.816329956s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.14( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.363191605s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.816421509s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.1c( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.988133430s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.441223145s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.1a( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.986459732s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.439743042s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.14( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.363083839s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.816421509s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.979521751s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.432739258s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.1a( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.986246109s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.439743042s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.11( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.363082886s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.816329956s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.10( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.362651825s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.816238403s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.10( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.362511635s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.816238403s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.1d( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.978612900s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 182.432525635s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.18( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.988206863s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.442138672s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.18( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.988192558s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.442138672s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.1d( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.978567123s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.432525635s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.13( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.362186432s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.816238403s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.13( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.362171173s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.816238403s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.19( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.986949921s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.441360474s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.1f( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.361430168s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.815872192s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.978149414s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 182.432601929s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.1f( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.361415863s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.815872192s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.19( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.986902237s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.441360474s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.978117943s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.432601929s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.1d( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.361785889s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.816192627s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.1d( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.361577034s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.816192627s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.9( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.977862358s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 182.432693481s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.a( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.360968590s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.815765381s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.9( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.977847099s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.432693481s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.a( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.360881805s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.815765381s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.7( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.986898422s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.441864014s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.7( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.986882210s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.441864014s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.7( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.977659225s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 182.432678223s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.1( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.977678299s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 182.432769775s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.7( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.977610588s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.432678223s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.1( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.977665901s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.432769775s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.16( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.355856895s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.811096191s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.16( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.355841637s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.811096191s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.4( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.360867500s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.816238403s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.1e( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.986061096s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.441467285s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.1e( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.986042976s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.441467285s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.1d( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.986731529s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.442169189s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.977339745s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 182.432830811s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.17( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.986633301s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.442184448s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.17( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.986579895s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.442184448s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.1d( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.986688614s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.442169189s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.977225304s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.432830811s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.4( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.360697746s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.816238403s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.11( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.976782799s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 182.432846069s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:27 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.11( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.975978851s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.432846069s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:27 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:28 compute-1 ceph-mon[80135]: 7.1c deep-scrub starts
Nov 23 20:43:28 compute-1 ceph-mon[80135]: 7.1c deep-scrub ok
Nov 23 20:43:28 compute-1 ceph-mon[80135]: 6.8 scrub starts
Nov 23 20:43:28 compute-1 ceph-mon[80135]: 6.8 scrub ok
Nov 23 20:43:28 compute-1 ceph-mon[80135]: pgmap v61: 337 pgs: 337 active+clean; 458 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 3 op/s
Nov 23 20:43:28 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 20:43:28 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 20:43:28 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 20:43:28 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 23 20:43:28 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 20:43:28 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 23 20:43:28 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 20:43:28 compute-1 ceph-mon[80135]: osdmap e60: 3 total, 3 up, 3 in
Nov 23 20:43:28 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Nov 23 20:43:28 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Nov 23 20:43:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:28 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009e30 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:28 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e61 e61: 3 total, 3 up, 3 in
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.17( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.17( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.15( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.15( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.13( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.13( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.f( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.f( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.d( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.d( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.b( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.b( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.3( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.3( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.5( v 59'994 (0'0,59'994] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=58'992 lcod 58'993 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.5( v 59'994 (0'0,59'994] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=58'992 lcod 58'993 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.1d( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.1d( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.9( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.9( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.1( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.1( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.7( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.11( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.7( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.11( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[9.15( v 43'12 (0'0,43'12] local-lis/les=60/61 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[8.14( v 50'45 (0'0,50'45] local-lis/les=60/61 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[8.17( v 50'45 (0'0,50'45] local-lis/les=60/61 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[11.14( v 59'57 lc 59'56 (0'0,59'57] local-lis/les=60/61 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=59'57 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[11.12( v 47'48 (0'0,47'48] local-lis/les=60/61 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[9.10( v 43'12 (0'0,43'12] local-lis/les=60/61 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[11.1( v 47'48 (0'0,47'48] local-lis/les=60/61 n=1 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[11.f( v 47'48 (0'0,47'48] local-lis/les=60/61 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[9.f( v 43'12 lc 0'0 (0'0,43'12] local-lis/les=60/61 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=43'12 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[9.d( v 43'12 (0'0,43'12] local-lis/les=60/61 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[8.8( v 50'45 (0'0,50'45] local-lis/les=60/61 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[9.e( v 43'12 (0'0,43'12] local-lis/les=60/61 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[11.5( v 47'48 (0'0,47'48] local-lis/les=60/61 n=1 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[11.4( v 47'48 (0'0,47'48] local-lis/les=60/61 n=1 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[11.7( v 47'48 (0'0,47'48] local-lis/les=60/61 n=1 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[8.4( v 50'45 (0'0,50'45] local-lis/les=60/61 n=1 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[9.6( v 43'12 lc 0'0 (0'0,43'12] local-lis/les=60/61 n=1 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=43'12 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[8.1b( v 50'45 lc 50'8 (0'0,50'45] local-lis/les=60/61 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[8.18( v 50'45 lc 50'19 (0'0,50'45] local-lis/les=60/61 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[11.1d( v 47'48 (0'0,47'48] local-lis/les=60/61 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[11.1b( v 47'48 (0'0,47'48] local-lis/les=60/61 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[11.1c( v 47'48 (0'0,47'48] local-lis/les=60/61 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[8.12( v 50'45 (0'0,50'45] local-lis/les=60/61 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[8.10( v 58'48 lc 50'14 (0'0,58'48] local-lis/les=60/61 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=58'48 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[9.11( v 43'12 lc 0'0 (0'0,43'12] local-lis/les=60/61 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=43'12 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[9.a( v 43'12 (0'0,43'12] local-lis/les=60/61 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[11.1e( v 47'48 (0'0,47'48] local-lis/les=60/61 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[8.19( v 50'45 (0'0,50'45] local-lis/les=60/61 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[9.12( v 43'12 (0'0,43'12] local-lis/les=60/61 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:28 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[11.1a( v 47'48 (0'0,47'48] local-lis/les=60/61 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:29 compute-1 ceph-mon[80135]: 7.11 scrub starts
Nov 23 20:43:29 compute-1 ceph-mon[80135]: 7.11 scrub ok
Nov 23 20:43:29 compute-1 ceph-mon[80135]: 9.14 scrub starts
Nov 23 20:43:29 compute-1 ceph-mon[80135]: 9.14 scrub ok
Nov 23 20:43:29 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:29 compute-1 ceph-mon[80135]: osdmap e61: 3 total, 3 up, 3 in
Nov 23 20:43:29 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 23 20:43:29 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 23 20:43:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:29 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6930003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:29 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e62 e62: 3 total, 3 up, 3 in
Nov 23 20:43:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.16( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62 pruub=15.955586433s) [1] r=-1 lpr=62 pi=[57,62)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.431549072s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.2( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62 pruub=15.955795288s) [1] r=-1 lpr=62 pi=[57,62)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.431884766s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.2( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62 pruub=15.955766678s) [1] r=-1 lpr=62 pi=[57,62)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.431884766s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.16( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62 pruub=15.955229759s) [1] r=-1 lpr=62 pi=[57,62)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.431549072s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.e( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62 pruub=15.955196381s) [1] r=-1 lpr=62 pi=[57,62)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.432067871s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.e( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62 pruub=15.955163002s) [1] r=-1 lpr=62 pi=[57,62)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.432067871s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.a( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62 pruub=15.955307961s) [1] r=-1 lpr=62 pi=[57,62)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.432479858s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.a( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62 pruub=15.955270767s) [1] r=-1 lpr=62 pi=[57,62)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.432479858s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.6( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62 pruub=15.954610825s) [1] r=-1 lpr=62 pi=[57,62)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.432510376s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.6( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62 pruub=15.954590797s) [1] r=-1 lpr=62 pi=[57,62)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.432510376s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.1a( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62 pruub=15.954823494s) [1] r=-1 lpr=62 pi=[57,62)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.433013916s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.1a( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62 pruub=15.954804420s) [1] r=-1 lpr=62 pi=[57,62)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.433013916s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62 pruub=15.954584122s) [1] r=-1 lpr=62 pi=[57,62)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.433151245s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62 pruub=15.954562187s) [1] r=-1 lpr=62 pi=[57,62)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.433151245s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.12( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62 pruub=15.954124451s) [1] r=-1 lpr=62 pi=[57,62)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.433120728s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.12( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62 pruub=15.954091072s) [1] r=-1 lpr=62 pi=[57,62)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.433120728s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.17( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.13( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[6.6( empty local-lis/les=0/0 n=0 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[6.2( empty local-lis/les=0/0 n=0 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[6.e( empty local-lis/les=0/0 n=0 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[6.a( empty local-lis/les=0/0 n=0 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.f( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.15( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.1d( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.5( v 59'994 (0'0,59'994] local-lis/les=61/62 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[57,61)/1 crt=59'994 lcod 58'993 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.d( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.b( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.1( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.9( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.3( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.11( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:29 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.7( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:29 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500028a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:30 compute-1 ceph-mon[80135]: 7.1a scrub starts
Nov 23 20:43:30 compute-1 ceph-mon[80135]: 7.1a scrub ok
Nov 23 20:43:30 compute-1 ceph-mon[80135]: 11.0 scrub starts
Nov 23 20:43:30 compute-1 ceph-mon[80135]: pgmap v64: 337 pgs: 337 active+clean; 458 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 4 op/s
Nov 23 20:43:30 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 23 20:43:30 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 23 20:43:30 compute-1 ceph-mon[80135]: osdmap e62: 3 total, 3 up, 3 in
Nov 23 20:43:30 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:30 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:30 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:30 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e63 e63: 3 total, 3 up, 3 in
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.12( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.1( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.992103577s) [2] async=[2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.489990234s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.12( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.1( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.992055893s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.489990234s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.991169930s) [2] async=[2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.489715576s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.1a( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.991127968s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.489715576s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.1a( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.6( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.6( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.990653038s) [2] async=[2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.489517212s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.a( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.a( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.e( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.e( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.1d( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.990526199s) [2] async=[2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.489685059s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.2( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.2( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.990082741s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.489517212s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.f( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.990076065s) [2] async=[2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.489517212s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.13( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.986392975s) [2] async=[2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.485916138s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.13( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.986267090s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.485916138s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.f( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.989757538s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.489517212s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.15( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.989747047s) [2] async=[2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.489562988s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.1d( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.990136147s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.489685059s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.15( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.989697456s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.489562988s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.16( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.16( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.b( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.989837646s) [2] async=[2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.489913940s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.17( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.985430717s) [2] async=[2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.485565186s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.b( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.989803314s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.489913940s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.17( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.985372543s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.485565186s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.d( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.989325523s) [2] async=[2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.489791870s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.d( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.989226341s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.489791870s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.5( v 62'998 (0'0,62'998] local-lis/les=61/62 n=6 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.988718987s) [2] async=[2] r=-1 lpr=63 pi=[57,63)/1 crt=59'994 lcod 62'997 mlcod 62'997 active pruub 190.489791870s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.5( v 62'998 (0'0,62'998] local-lis/les=61/62 n=6 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.988603592s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=59'994 lcod 62'997 mlcod 0'0 unknown NOTIFY pruub 190.489791870s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[6.a( v 49'39 (0'0,49'39] local-lis/les=62/63 n=0 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=49'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[6.6( v 49'39 lc 0'0 (0'0,49'39] local-lis/les=62/63 n=2 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=49'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[6.2( v 49'39 (0'0,49'39] local-lis/les=62/63 n=2 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=49'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[6.e( v 49'39 lc 48'19 (0'0,49'39] local-lis/les=62/63 n=1 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=49'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:31 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 6.2 deep-scrub starts
Nov 23 20:43:31 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 6.2 deep-scrub ok
Nov 23 20:43:31 compute-1 ceph-mon[80135]: 11.0 scrub ok
Nov 23 20:43:31 compute-1 ceph-mon[80135]: 9.2 deep-scrub starts
Nov 23 20:43:31 compute-1 ceph-mon[80135]: 9.2 deep-scrub ok
Nov 23 20:43:31 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:31 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:31 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:31 compute-1 ceph-mon[80135]: Regenerating cephadm self-signed grafana TLS certificates
Nov 23 20:43:31 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:31 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:31 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Nov 23 20:43:31 compute-1 ceph-mon[80135]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Nov 23 20:43:31 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:31 compute-1 ceph-mon[80135]: Deploying daemon grafana.compute-0 on compute-0
Nov 23 20:43:31 compute-1 ceph-mon[80135]: osdmap e63: 3 total, 3 up, 3 in
Nov 23 20:43:31 compute-1 ceph-mon[80135]: 6.7 scrub starts
Nov 23 20:43:31 compute-1 ceph-mon[80135]: 6.7 scrub ok
Nov 23 20:43:31 compute-1 ceph-mon[80135]: 11.c scrub starts
Nov 23 20:43:31 compute-1 ceph-mon[80135]: 11.c scrub ok
Nov 23 20:43:31 compute-1 ceph-mon[80135]: 6.2 deep-scrub starts
Nov 23 20:43:31 compute-1 ceph-mon[80135]: 6.2 deep-scrub ok
Nov 23 20:43:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:31 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009fd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:31 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e64 e64: 3 total, 3 up, 3 in
Nov 23 20:43:31 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 64 pg[10.3( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=64 pruub=13.979523659s) [2] async=[2] r=-1 lpr=64 pi=[57,64)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.490051270s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:31 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 64 pg[10.3( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=64 pruub=13.979463577s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.490051270s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:31 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 64 pg[10.9( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=64 pruub=13.978591919s) [2] async=[2] r=-1 lpr=64 pi=[57,64)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.490005493s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:31 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 64 pg[10.9( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=64 pruub=13.978328705s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.490005493s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:31 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 64 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=64 pruub=13.978281975s) [2] async=[2] r=-1 lpr=64 pi=[57,64)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.490081787s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:31 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 64 pg[10.11( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=64 pruub=13.978183746s) [2] async=[2] r=-1 lpr=64 pi=[57,64)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.490036011s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:31 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 64 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=64 pruub=13.978253365s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.490081787s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:31 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 64 pg[10.11( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=64 pruub=13.978151321s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.490036011s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:31 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 64 pg[10.7( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=64 pruub=13.978464127s) [2] async=[2] r=-1 lpr=64 pi=[57,64)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.490554810s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:31 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 64 pg[10.7( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=64 pruub=13.978434563s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.490554810s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:31 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 64 pg[10.6( v 50'991 (0'0,50'991] local-lis/les=63/64 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] async=[1] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:31 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 64 pg[10.2( v 50'991 (0'0,50'991] local-lis/les=63/64 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] async=[1] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:31 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 64 pg[10.e( v 50'991 (0'0,50'991] local-lis/les=63/64 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] async=[1] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:31 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 64 pg[10.12( v 50'991 (0'0,50'991] local-lis/les=63/64 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] async=[1] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:31 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 64 pg[10.16( v 50'991 (0'0,50'991] local-lis/les=63/64 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] async=[1] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:31 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 64 pg[10.a( v 50'991 (0'0,50'991] local-lis/les=63/64 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] async=[1] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:31 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 64 pg[10.1a( v 50'991 (0'0,50'991] local-lis/les=63/64 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] async=[1] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:31 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 64 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=63/64 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] async=[1] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:31 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009fd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:31 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:43:32 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:32 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500028a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:32 compute-1 ceph-mon[80135]: pgmap v67: 337 pgs: 1 active+recovering+remapped, 8 unknown, 6 active+recovery_wait+remapped, 9 active+remapped, 4 peering, 309 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 35/182 objects misplaced (19.231%); 954 B/s, 2 keys/s, 20 objects/s recovering
Nov 23 20:43:32 compute-1 ceph-mon[80135]: osdmap e64: 3 total, 3 up, 3 in
Nov 23 20:43:32 compute-1 ceph-mon[80135]: 12.9 scrub starts
Nov 23 20:43:32 compute-1 ceph-mon[80135]: 12.9 scrub ok
Nov 23 20:43:32 compute-1 ceph-mon[80135]: 11.b scrub starts
Nov 23 20:43:32 compute-1 ceph-mon[80135]: 11.b scrub ok
Nov 23 20:43:32 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e65 e65: 3 total, 3 up, 3 in
Nov 23 20:43:32 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 65 pg[10.16( v 50'991 (0'0,50'991] local-lis/les=63/64 n=5 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65 pruub=15.041407585s) [1] async=[1] r=-1 lpr=65 pi=[57,65)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 192.573394775s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:32 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 65 pg[10.16( v 50'991 (0'0,50'991] local-lis/les=63/64 n=5 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65 pruub=15.041334152s) [1] r=-1 lpr=65 pi=[57,65)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.573394775s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:32 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 65 pg[10.e( v 50'991 (0'0,50'991] local-lis/les=63/64 n=6 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65 pruub=15.040791512s) [1] async=[1] r=-1 lpr=65 pi=[57,65)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 192.573303223s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:32 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 65 pg[10.e( v 50'991 (0'0,50'991] local-lis/les=63/64 n=6 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65 pruub=15.040742874s) [1] r=-1 lpr=65 pi=[57,65)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.573303223s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:32 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 65 pg[10.2( v 50'991 (0'0,50'991] local-lis/les=63/64 n=6 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65 pruub=15.035518646s) [1] async=[1] r=-1 lpr=65 pi=[57,65)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 192.568283081s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:32 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 65 pg[10.2( v 50'991 (0'0,50'991] local-lis/les=63/64 n=6 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65 pruub=15.035380363s) [1] r=-1 lpr=65 pi=[57,65)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.568283081s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:32 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 65 pg[10.a( v 50'991 (0'0,50'991] local-lis/les=63/64 n=6 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65 pruub=15.040402412s) [1] async=[1] r=-1 lpr=65 pi=[57,65)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 192.573410034s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:32 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 65 pg[10.a( v 50'991 (0'0,50'991] local-lis/les=63/64 n=6 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65 pruub=15.040328979s) [1] r=-1 lpr=65 pi=[57,65)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.573410034s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:32 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 65 pg[10.1a( v 50'991 (0'0,50'991] local-lis/les=63/64 n=5 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65 pruub=15.039586067s) [1] async=[1] r=-1 lpr=65 pi=[57,65)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 192.573455811s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:32 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 65 pg[10.1a( v 50'991 (0'0,50'991] local-lis/les=63/64 n=5 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65 pruub=15.039546013s) [1] r=-1 lpr=65 pi=[57,65)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.573455811s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:32 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 65 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=63/64 n=5 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65 pruub=15.039502144s) [1] async=[1] r=-1 lpr=65 pi=[57,65)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 192.573547363s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:32 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 65 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=63/64 n=5 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65 pruub=15.039457321s) [1] r=-1 lpr=65 pi=[57,65)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.573547363s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:32 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 65 pg[10.12( v 50'991 (0'0,50'991] local-lis/les=63/64 n=6 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65 pruub=15.038745880s) [1] async=[1] r=-1 lpr=65 pi=[57,65)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 192.573318481s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:32 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 65 pg[10.6( v 50'991 (0'0,50'991] local-lis/les=63/64 n=6 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65 pruub=15.033739090s) [1] async=[1] r=-1 lpr=65 pi=[57,65)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 192.568267822s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:32 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 65 pg[10.6( v 50'991 (0'0,50'991] local-lis/les=63/64 n=6 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65 pruub=15.033395767s) [1] r=-1 lpr=65 pi=[57,65)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.568267822s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:32 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 65 pg[10.12( v 50'991 (0'0,50'991] local-lis/les=63/64 n=6 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65 pruub=15.038654327s) [1] r=-1 lpr=65 pi=[57,65)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.573318481s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:33 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:33 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003820 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:33 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:33 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6930003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:34 compute-1 ceph-mon[80135]: 11.e scrub starts
Nov 23 20:43:34 compute-1 ceph-mon[80135]: osdmap e65: 3 total, 3 up, 3 in
Nov 23 20:43:34 compute-1 ceph-mon[80135]: 11.e scrub ok
Nov 23 20:43:34 compute-1 ceph-mon[80135]: 11.9 scrub starts
Nov 23 20:43:34 compute-1 ceph-mon[80135]: 11.9 scrub ok
Nov 23 20:43:34 compute-1 ceph-mon[80135]: pgmap v70: 337 pgs: 1 active+recovering+remapped, 8 unknown, 6 active+recovery_wait+remapped, 9 active+remapped, 4 peering, 309 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 35/182 objects misplaced (19.231%); 954 B/s, 2 keys/s, 20 objects/s recovering
Nov 23 20:43:34 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:34 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:34 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c000b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:34 compute-1 sshd-session[86474]: Received disconnect from 34.91.0.68 port 44128:11: Bye Bye [preauth]
Nov 23 20:43:34 compute-1 sshd-session[86474]: Disconnected from authenticating user root 34.91.0.68 port 44128 [preauth]
Nov 23 20:43:35 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.15 scrub starts
Nov 23 20:43:35 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e66 e66: 3 total, 3 up, 3 in
Nov 23 20:43:35 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.15 scrub ok
Nov 23 20:43:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:35 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009fd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:35 compute-1 ceph-mon[80135]: 8.d deep-scrub starts
Nov 23 20:43:35 compute-1 ceph-mon[80135]: 8.d deep-scrub ok
Nov 23 20:43:35 compute-1 ceph-mon[80135]: 11.d scrub starts
Nov 23 20:43:35 compute-1 ceph-mon[80135]: 11.d scrub ok
Nov 23 20:43:35 compute-1 ceph-mon[80135]: 7.14 scrub starts
Nov 23 20:43:35 compute-1 ceph-mon[80135]: 7.14 scrub ok
Nov 23 20:43:35 compute-1 ceph-mon[80135]: 8.e scrub starts
Nov 23 20:43:35 compute-1 ceph-mon[80135]: 8.e scrub ok
Nov 23 20:43:35 compute-1 ceph-mon[80135]: 12.15 scrub starts
Nov 23 20:43:35 compute-1 ceph-mon[80135]: 12.15 scrub ok
Nov 23 20:43:35 compute-1 ceph-mon[80135]: osdmap e66: 3 total, 3 up, 3 in
Nov 23 20:43:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:35 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003820 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:35 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.19 deep-scrub starts
Nov 23 20:43:35 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.19 deep-scrub ok
Nov 23 20:43:36 compute-1 ceph-mon[80135]: pgmap v71: 337 pgs: 1 active+recovering+remapped, 8 unknown, 6 active+recovery_wait+remapped, 9 active+remapped, 4 peering, 309 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 35/182 objects misplaced (19.231%); 700 B/s, 1 keys/s, 14 objects/s recovering
Nov 23 20:43:36 compute-1 ceph-mon[80135]: 12.18 scrub starts
Nov 23 20:43:36 compute-1 ceph-mon[80135]: 12.18 scrub ok
Nov 23 20:43:36 compute-1 ceph-mon[80135]: 9.c scrub starts
Nov 23 20:43:36 compute-1 ceph-mon[80135]: 9.c scrub ok
Nov 23 20:43:36 compute-1 ceph-mon[80135]: 7.19 deep-scrub starts
Nov 23 20:43:36 compute-1 ceph-mon[80135]: 7.19 deep-scrub ok
Nov 23 20:43:36 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:36 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6930003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:36 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:43:37 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.d scrub starts
Nov 23 20:43:37 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.d scrub ok
Nov 23 20:43:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:37 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:37 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009fd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:38 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Nov 23 20:43:38 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Nov 23 20:43:38 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:38 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003820 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:39 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Nov 23 20:43:39 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Nov 23 20:43:39 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e67 e67: 3 total, 3 up, 3 in
Nov 23 20:43:39 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 67 pg[6.b( empty local-lis/les=0/0 n=0 ec=53/18 lis/c=60/60 les/c/f=61/62/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:39 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=53/18 lis/c=60/60 les/c/f=61/62/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:39 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 67 pg[6.7( empty local-lis/les=0/0 n=0 ec=53/18 lis/c=60/60 les/c/f=61/62/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:39 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 67 pg[6.3( empty local-lis/les=0/0 n=0 ec=53/18 lis/c=60/60 les/c/f=61/61/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:39 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:39 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003820 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:39 compute-1 ceph-mon[80135]: 8.c scrub starts
Nov 23 20:43:39 compute-1 ceph-mon[80135]: 8.c scrub ok
Nov 23 20:43:39 compute-1 ceph-mon[80135]: 9.0 scrub starts
Nov 23 20:43:39 compute-1 ceph-mon[80135]: 9.0 scrub ok
Nov 23 20:43:39 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 23 20:43:39 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 23 20:43:39 compute-1 ceph-mon[80135]: 7.d scrub starts
Nov 23 20:43:39 compute-1 ceph-mon[80135]: 7.d scrub ok
Nov 23 20:43:39 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:39 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:40 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Nov 23 20:43:40 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Nov 23 20:43:40 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e68 e68: 3 total, 3 up, 3 in
Nov 23 20:43:40 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 68 pg[6.3( v 49'39 lc 0'0 (0'0,49'39] local-lis/les=67/68 n=2 ec=53/18 lis/c=60/60 les/c/f=61/61/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=49'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:40 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 68 pg[6.b( v 49'39 lc 0'0 (0'0,49'39] local-lis/les=67/68 n=1 ec=53/18 lis/c=60/60 les/c/f=61/62/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=49'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:40 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 68 pg[6.f( v 49'39 lc 48'1 (0'0,49'39] local-lis/les=67/68 n=3 ec=53/18 lis/c=60/60 les/c/f=61/62/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=49'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:40 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 68 pg[6.7( v 49'39 lc 48'21 (0'0,49'39] local-lis/les=67/68 n=1 ec=53/18 lis/c=60/60 les/c/f=61/62/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=49'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:40 compute-1 ceph-mon[80135]: pgmap v73: 337 pgs: 337 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 682 B/s wr, 89 op/s; 226 B/s, 12 objects/s recovering
Nov 23 20:43:40 compute-1 ceph-mon[80135]: 9.7 scrub starts
Nov 23 20:43:40 compute-1 ceph-mon[80135]: 9.7 scrub ok
Nov 23 20:43:40 compute-1 ceph-mon[80135]: 8.1 scrub starts
Nov 23 20:43:40 compute-1 ceph-mon[80135]: 8.1 scrub ok
Nov 23 20:43:40 compute-1 ceph-mon[80135]: 7.1 scrub starts
Nov 23 20:43:40 compute-1 ceph-mon[80135]: 7.1 scrub ok
Nov 23 20:43:40 compute-1 ceph-mon[80135]: 8.1c scrub starts
Nov 23 20:43:40 compute-1 ceph-mon[80135]: 8.1c scrub ok
Nov 23 20:43:40 compute-1 ceph-mon[80135]: 11.2 scrub starts
Nov 23 20:43:40 compute-1 ceph-mon[80135]: 11.2 scrub ok
Nov 23 20:43:40 compute-1 ceph-mon[80135]: pgmap v74: 337 pgs: 337 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 551 B/s wr, 72 op/s; 183 B/s, 10 objects/s recovering
Nov 23 20:43:40 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 23 20:43:40 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 23 20:43:40 compute-1 ceph-mon[80135]: 7.7 scrub starts
Nov 23 20:43:40 compute-1 ceph-mon[80135]: 7.7 scrub ok
Nov 23 20:43:40 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 23 20:43:40 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 23 20:43:40 compute-1 ceph-mon[80135]: osdmap e67: 3 total, 3 up, 3 in
Nov 23 20:43:40 compute-1 ceph-mon[80135]: 12.2 scrub starts
Nov 23 20:43:40 compute-1 ceph-mon[80135]: 12.2 scrub ok
Nov 23 20:43:40 compute-1 ceph-mon[80135]: 8.0 scrub starts
Nov 23 20:43:40 compute-1 ceph-mon[80135]: 8.0 scrub ok
Nov 23 20:43:40 compute-1 ceph-mon[80135]: 7.0 scrub starts
Nov 23 20:43:40 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 23 20:43:40 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 23 20:43:40 compute-1 ceph-mon[80135]: osdmap e68: 3 total, 3 up, 3 in
Nov 23 20:43:40 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:40 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009fd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:41 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.d scrub starts
Nov 23 20:43:41 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.d scrub ok
Nov 23 20:43:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:41 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009fd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:41 compute-1 ceph-mon[80135]: 7.0 scrub ok
Nov 23 20:43:41 compute-1 ceph-mon[80135]: 8.1f scrub starts
Nov 23 20:43:41 compute-1 ceph-mon[80135]: 8.1f scrub ok
Nov 23 20:43:41 compute-1 ceph-mon[80135]: 9.1 scrub starts
Nov 23 20:43:41 compute-1 ceph-mon[80135]: 9.1 scrub ok
Nov 23 20:43:41 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:41 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:41 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:41 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:41 compute-1 ceph-mon[80135]: 12.d scrub starts
Nov 23 20:43:41 compute-1 ceph-mon[80135]: 12.d scrub ok
Nov 23 20:43:41 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:41 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540013b0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:41 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:43:42 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.5 scrub starts
Nov 23 20:43:42 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.5 scrub ok
Nov 23 20:43:42 compute-1 ceph-mon[80135]: pgmap v77: 337 pgs: 1 active+clean+scrubbing, 2 active+recovery_wait+degraded, 1 active+recovering, 333 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 0 B/s wr, 86 op/s; 3/226 objects degraded (1.327%); 2/226 objects misplaced (0.885%); 226 B/s, 12 objects/s recovering
Nov 23 20:43:42 compute-1 ceph-mon[80135]: Deploying daemon haproxy.rgw.default.compute-0.pteysg on compute-0
Nov 23 20:43:42 compute-1 ceph-mon[80135]: 8.11 scrub starts
Nov 23 20:43:42 compute-1 ceph-mon[80135]: 8.11 scrub ok
Nov 23 20:43:42 compute-1 ceph-mon[80135]: Health check failed: Degraded data redundancy: 3/226 objects degraded (1.327%), 2 pgs degraded (PG_DEGRADED)
Nov 23 20:43:42 compute-1 ceph-mon[80135]: 8.7 scrub starts
Nov 23 20:43:42 compute-1 ceph-mon[80135]: 8.7 scrub ok
Nov 23 20:43:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:42 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/204342 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 20:43:43 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.0 scrub starts
Nov 23 20:43:43 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.0 scrub ok
Nov 23 20:43:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:43 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003820 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:43:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.002000048s ======
Nov 23 20:43:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:43:43.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Nov 23 20:43:43 compute-1 ceph-mon[80135]: 12.5 scrub starts
Nov 23 20:43:43 compute-1 ceph-mon[80135]: 12.5 scrub ok
Nov 23 20:43:43 compute-1 ceph-mon[80135]: 12.7 scrub starts
Nov 23 20:43:43 compute-1 ceph-mon[80135]: 12.7 scrub ok
Nov 23 20:43:43 compute-1 ceph-mon[80135]: 11.18 scrub starts
Nov 23 20:43:43 compute-1 ceph-mon[80135]: 11.18 scrub ok
Nov 23 20:43:43 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:43 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:43 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:43 compute-1 ceph-mon[80135]: 12.0 scrub starts
Nov 23 20:43:43 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:43 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009fd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:44 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.1f scrub starts
Nov 23 20:43:44 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.1f scrub ok
Nov 23 20:43:44 compute-1 ceph-mon[80135]: Deploying daemon haproxy.rgw.default.compute-2.tmivar on compute-2
Nov 23 20:43:44 compute-1 ceph-mon[80135]: pgmap v78: 337 pgs: 1 active+clean+scrubbing, 2 active+recovery_wait+degraded, 1 active+recovering, 333 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 66 op/s; 3/226 objects degraded (1.327%); 2/226 objects misplaced (0.885%); 174 B/s, 9 objects/s recovering
Nov 23 20:43:44 compute-1 ceph-mon[80135]: 12.0 scrub ok
Nov 23 20:43:44 compute-1 ceph-mon[80135]: 12.11 scrub starts
Nov 23 20:43:44 compute-1 ceph-mon[80135]: 12.11 scrub ok
Nov 23 20:43:44 compute-1 ceph-mon[80135]: 9.1a scrub starts
Nov 23 20:43:44 compute-1 ceph-mon[80135]: 9.1a scrub ok
Nov 23 20:43:44 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:44 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540013b0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:44 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Nov 23 20:43:44 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:44.750365) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 20:43:44 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Nov 23 20:43:44 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930624750472, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7100, "num_deletes": 254, "total_data_size": 19730730, "memory_usage": 20664960, "flush_reason": "Manual Compaction"}
Nov 23 20:43:44 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Nov 23 20:43:45 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930625022075, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 12675099, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 7105, "table_properties": {"data_size": 12647598, "index_size": 17658, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8773, "raw_key_size": 84866, "raw_average_key_size": 24, "raw_value_size": 12579835, "raw_average_value_size": 3601, "num_data_blocks": 779, "num_entries": 3493, "num_filter_entries": 3493, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 1763930466, "file_creation_time": 1763930624, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Nov 23 20:43:45 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 271761 microseconds, and 27632 cpu microseconds.
Nov 23 20:43:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:45.022133) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 12675099 bytes OK
Nov 23 20:43:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:45.022151) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Nov 23 20:43:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:45.026799) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Nov 23 20:43:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:45.026814) EVENT_LOG_v1 {"time_micros": 1763930625026810, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Nov 23 20:43:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:45.026829) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Nov 23 20:43:45 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 19692908, prev total WAL file size 19694885, number of live WAL files 2.
Nov 23 20:43:45 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 20:43:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:45.030748) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Nov 23 20:43:45 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Nov 23 20:43:45 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(12MB) 8(1648B)]
Nov 23 20:43:45 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930625030830, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 12676747, "oldest_snapshot_seqno": -1}
Nov 23 20:43:45 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Nov 23 20:43:45 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Nov 23 20:43:45 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 3243 keys, 12671646 bytes, temperature: kUnknown
Nov 23 20:43:45 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930625187185, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 12671646, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12644793, "index_size": 17655, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8133, "raw_key_size": 81455, "raw_average_key_size": 25, "raw_value_size": 12580115, "raw_average_value_size": 3879, "num_data_blocks": 778, "num_entries": 3243, "num_filter_entries": 3243, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763930625, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Nov 23 20:43:45 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 20:43:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:45.187384) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 12671646 bytes
Nov 23 20:43:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:45.195756) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 81.0 rd, 81.0 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(12.1, 0.0 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3498, records dropped: 255 output_compression: NoCompression
Nov 23 20:43:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:45.195773) EVENT_LOG_v1 {"time_micros": 1763930625195766, "job": 4, "event": "compaction_finished", "compaction_time_micros": 156409, "compaction_time_cpu_micros": 22598, "output_level": 6, "num_output_files": 1, "total_output_size": 12671646, "num_input_records": 3498, "num_output_records": 3243, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 20:43:45 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 20:43:45 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930625198245, "job": 4, "event": "table_file_deletion", "file_number": 14}
Nov 23 20:43:45 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 20:43:45 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930625198288, "job": 4, "event": "table_file_deletion", "file_number": 8}
Nov 23 20:43:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:45.030667) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:43:45 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:45 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:43:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:43:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:43:45.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:43:45 compute-1 ceph-mon[80135]: 12.1f scrub starts
Nov 23 20:43:45 compute-1 ceph-mon[80135]: 12.1f scrub ok
Nov 23 20:43:45 compute-1 ceph-mon[80135]: 8.3 scrub starts
Nov 23 20:43:45 compute-1 ceph-mon[80135]: 8.3 scrub ok
Nov 23 20:43:45 compute-1 ceph-mon[80135]: 8.1a scrub starts
Nov 23 20:43:45 compute-1 ceph-mon[80135]: 8.1a scrub ok
Nov 23 20:43:45 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:45 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:45 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:45 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:43:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:43:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:43:45.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:43:45 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:45 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003820 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:46 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.1b scrub starts
Nov 23 20:43:46 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.1b scrub ok
Nov 23 20:43:46 compute-1 ceph-mon[80135]: pgmap v79: 337 pgs: 1 active+clean+scrubbing, 2 active+recovery_wait+degraded, 1 active+recovering, 333 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 3/226 objects degraded (1.327%); 2/226 objects misplaced (0.885%); 0 B/s, 0 objects/s recovering
Nov 23 20:43:46 compute-1 ceph-mon[80135]: 7.17 scrub starts
Nov 23 20:43:46 compute-1 ceph-mon[80135]: 7.17 scrub ok
Nov 23 20:43:46 compute-1 ceph-mon[80135]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 23 20:43:46 compute-1 ceph-mon[80135]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 23 20:43:46 compute-1 ceph-mon[80135]: Deploying daemon keepalived.rgw.default.compute-0.xymmfk on compute-0
Nov 23 20:43:46 compute-1 ceph-mon[80135]: 8.6 deep-scrub starts
Nov 23 20:43:46 compute-1 ceph-mon[80135]: 8.6 deep-scrub ok
Nov 23 20:43:46 compute-1 ceph-mon[80135]: 9.1b scrub starts
Nov 23 20:43:46 compute-1 ceph-mon[80135]: 9.1b scrub ok
Nov 23 20:43:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:46 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009fd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:46 compute-1 sshd-session[86484]: Accepted publickey for zuul from 192.168.122.30 port 57150 ssh2: ECDSA SHA256:7LF3rB/846W//CS4OIcVKlH1BXQGVCcZuH+b9rjPyTo
Nov 23 20:43:46 compute-1 systemd-logind[793]: New session 36 of user zuul.
Nov 23 20:43:46 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:43:46 compute-1 systemd[1]: Started Session 36 of User zuul.
Nov 23 20:43:46 compute-1 sshd-session[86484]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 20:43:47 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.12 deep-scrub starts
Nov 23 20:43:47 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.12 deep-scrub ok
Nov 23 20:43:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:47 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954002670 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:43:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:43:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:43:47.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:43:47 compute-1 ceph-mon[80135]: 12.1b scrub starts
Nov 23 20:43:47 compute-1 ceph-mon[80135]: 12.1b scrub ok
Nov 23 20:43:47 compute-1 ceph-mon[80135]: 9.1d deep-scrub starts
Nov 23 20:43:47 compute-1 ceph-mon[80135]: 9.1d deep-scrub ok
Nov 23 20:43:47 compute-1 ceph-mon[80135]: 8.1e scrub starts
Nov 23 20:43:47 compute-1 ceph-mon[80135]: 8.1e scrub ok
Nov 23 20:43:47 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:47 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:47 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:47 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 23 20:43:47 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 23 20:43:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:43:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:43:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:43:47.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:43:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:47 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:47 compute-1 python3.9[86638]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:43:47 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e69 e69: 3 total, 3 up, 3 in
Nov 23 20:43:47 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 69 pg[10.14( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=69 pruub=13.493597984s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 206.428329468s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:47 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 69 pg[10.14( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=69 pruub=13.493473053s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.428329468s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:47 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 69 pg[10.c( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=69 pruub=13.497659683s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 206.432708740s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:47 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 69 pg[10.c( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=69 pruub=13.497632027s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.432708740s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:47 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 69 pg[10.4( v 59'998 (0'0,59'998] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=69 pruub=13.496999741s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=59'998 lcod 59'997 mlcod 59'997 active pruub 206.432723999s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:47 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 69 pg[10.4( v 59'998 (0'0,59'998] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=69 pruub=13.496963501s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=59'998 lcod 59'997 mlcod 0'0 unknown NOTIFY pruub 206.432723999s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:47 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 69 pg[10.1c( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=69 pruub=13.496662140s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 206.432937622s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:47 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 69 pg[10.1c( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=69 pruub=13.496646881s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.432937622s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:48 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.16 deep-scrub starts
Nov 23 20:43:48 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.16 deep-scrub ok
Nov 23 20:43:48 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:48 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:48 compute-1 ceph-mon[80135]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 23 20:43:48 compute-1 ceph-mon[80135]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 23 20:43:48 compute-1 ceph-mon[80135]: Deploying daemon keepalived.rgw.default.compute-2.zjypck on compute-2
Nov 23 20:43:48 compute-1 ceph-mon[80135]: pgmap v80: 337 pgs: 337 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 80 B/s, 1 keys/s, 0 objects/s recovering
Nov 23 20:43:48 compute-1 ceph-mon[80135]: 7.12 deep-scrub starts
Nov 23 20:43:48 compute-1 ceph-mon[80135]: 7.12 deep-scrub ok
Nov 23 20:43:48 compute-1 ceph-mon[80135]: 12.13 scrub starts
Nov 23 20:43:48 compute-1 ceph-mon[80135]: 12.13 scrub ok
Nov 23 20:43:48 compute-1 ceph-mon[80135]: 9.1f scrub starts
Nov 23 20:43:48 compute-1 ceph-mon[80135]: 9.1f scrub ok
Nov 23 20:43:48 compute-1 ceph-mon[80135]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 3/226 objects degraded (1.327%), 2 pgs degraded)
Nov 23 20:43:48 compute-1 ceph-mon[80135]: Cluster is now healthy
Nov 23 20:43:48 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 23 20:43:48 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 23 20:43:48 compute-1 ceph-mon[80135]: osdmap e69: 3 total, 3 up, 3 in
Nov 23 20:43:48 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e70 e70: 3 total, 3 up, 3 in
Nov 23 20:43:48 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 70 pg[10.14( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=70) [2]/[0] r=0 lpr=70 pi=[57,70)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:48 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 70 pg[10.14( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=70) [2]/[0] r=0 lpr=70 pi=[57,70)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:48 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 70 pg[10.c( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=70) [2]/[0] r=0 lpr=70 pi=[57,70)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:48 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 70 pg[10.4( v 59'998 (0'0,59'998] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=70) [2]/[0] r=0 lpr=70 pi=[57,70)/1 crt=59'998 lcod 59'997 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:48 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 70 pg[10.4( v 59'998 (0'0,59'998] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=70) [2]/[0] r=0 lpr=70 pi=[57,70)/1 crt=59'998 lcod 59'997 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:48 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 70 pg[10.1c( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=70) [2]/[0] r=0 lpr=70 pi=[57,70)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:48 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 70 pg[10.1c( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=70) [2]/[0] r=0 lpr=70 pi=[57,70)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:48 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 70 pg[10.c( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=70) [2]/[0] r=0 lpr=70 pi=[57,70)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:48 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 70 pg[6.d( empty local-lis/les=0/0 n=0 ec=53/18 lis/c=60/60 les/c/f=61/61/0 sis=70) [0] r=0 lpr=70 pi=[60,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:48 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 70 pg[6.5( empty local-lis/les=0/0 n=0 ec=53/18 lis/c=60/60 les/c/f=61/62/0 sis=70) [0] r=0 lpr=70 pi=[60,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:43:49 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.14 scrub starts
Nov 23 20:43:49 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.14 scrub ok
Nov 23 20:43:49 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Nov 23 20:43:49 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:49.135625) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 20:43:49 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Nov 23 20:43:49 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930629135721, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 434, "num_deletes": 251, "total_data_size": 372779, "memory_usage": 382584, "flush_reason": "Manual Compaction"}
Nov 23 20:43:49 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Nov 23 20:43:49 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930629142776, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 244168, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7110, "largest_seqno": 7539, "table_properties": {"data_size": 241580, "index_size": 624, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6752, "raw_average_key_size": 19, "raw_value_size": 236093, "raw_average_value_size": 672, "num_data_blocks": 26, "num_entries": 351, "num_filter_entries": 351, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930625, "oldest_key_time": 1763930625, "file_creation_time": 1763930629, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Nov 23 20:43:49 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 7171 microseconds, and 3313 cpu microseconds.
Nov 23 20:43:49 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 20:43:49 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:49.142812) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 244168 bytes OK
Nov 23 20:43:49 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:49.142830) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Nov 23 20:43:49 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:49.144260) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Nov 23 20:43:49 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:49.144273) EVENT_LOG_v1 {"time_micros": 1763930629144269, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 20:43:49 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:49.144287) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 20:43:49 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 369934, prev total WAL file size 388166, number of live WAL files 2.
Nov 23 20:43:49 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 20:43:49 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:49.145235) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Nov 23 20:43:49 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 20:43:49 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(238KB)], [15(12MB)]
Nov 23 20:43:49 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930629145313, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 12915814, "oldest_snapshot_seqno": -1}
Nov 23 20:43:49 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 3074 keys, 11707743 bytes, temperature: kUnknown
Nov 23 20:43:49 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930629272230, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 11707743, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11682846, "index_size": 16084, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7749, "raw_key_size": 79263, "raw_average_key_size": 25, "raw_value_size": 11621758, "raw_average_value_size": 3780, "num_data_blocks": 702, "num_entries": 3074, "num_filter_entries": 3074, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763930629, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Nov 23 20:43:49 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 20:43:49 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:49.272647) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 11707743 bytes
Nov 23 20:43:49 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:49.276647) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 101.7 rd, 92.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 12.1 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(100.8) write-amplify(47.9) OK, records in: 3594, records dropped: 520 output_compression: NoCompression
Nov 23 20:43:49 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:49.276685) EVENT_LOG_v1 {"time_micros": 1763930629276668, "job": 6, "event": "compaction_finished", "compaction_time_micros": 127015, "compaction_time_cpu_micros": 47136, "output_level": 6, "num_output_files": 1, "total_output_size": 11707743, "num_input_records": 3594, "num_output_records": 3074, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 20:43:49 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 20:43:49 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930629276950, "job": 6, "event": "table_file_deletion", "file_number": 17}
Nov 23 20:43:49 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 20:43:49 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930629280839, "job": 6, "event": "table_file_deletion", "file_number": 15}
Nov 23 20:43:49 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:49.145105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:43:49 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:49.280973) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:43:49 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:49.280980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:43:49 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:49.280982) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:43:49 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:49.280984) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:43:49 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:49.280986) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:43:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009ff0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:43:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:43:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:43:49.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:43:49 compute-1 ceph-mon[80135]: 12.16 deep-scrub starts
Nov 23 20:43:49 compute-1 ceph-mon[80135]: 12.16 deep-scrub ok
Nov 23 20:43:49 compute-1 ceph-mon[80135]: 9.16 scrub starts
Nov 23 20:43:49 compute-1 ceph-mon[80135]: 9.16 scrub ok
Nov 23 20:43:49 compute-1 ceph-mon[80135]: 8.1d scrub starts
Nov 23 20:43:49 compute-1 ceph-mon[80135]: 8.1d scrub ok
Nov 23 20:43:49 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 23 20:43:49 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 23 20:43:49 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 23 20:43:49 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 23 20:43:49 compute-1 ceph-mon[80135]: osdmap e70: 3 total, 3 up, 3 in
Nov 23 20:43:49 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:49 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:49 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:49 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:43:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:43:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:43:49.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:43:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540027f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:49 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e71 e71: 3 total, 3 up, 3 in
Nov 23 20:43:49 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 71 pg[6.d( v 49'39 lc 48'13 (0'0,49'39] local-lis/les=70/71 n=1 ec=53/18 lis/c=60/60 les/c/f=61/61/0 sis=70) [0] r=0 lpr=70 pi=[60,70)/1 crt=49'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:49 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 71 pg[6.5( v 49'39 lc 48'11 (0'0,49'39] local-lis/les=70/71 n=2 ec=53/18 lis/c=60/60 les/c/f=61/62/0 sis=70) [0] r=0 lpr=70 pi=[60,70)/1 crt=49'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:49 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 71 pg[10.c( v 50'991 (0'0,50'991] local-lis/les=70/71 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[57,70)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:49 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 71 pg[10.14( v 50'991 (0'0,50'991] local-lis/les=70/71 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[57,70)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:49 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 71 pg[10.4( v 59'998 (0'0,59'998] local-lis/les=70/71 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[57,70)/1 crt=59'998 lcod 59'997 mlcod 0'0 active+remapped mbc={255={(0+1)=10}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:49 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 71 pg[10.1c( v 50'991 (0'0,50'991] local-lis/les=70/71 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[57,70)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:43:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:50 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:50 compute-1 ceph-mon[80135]: pgmap v82: 337 pgs: 337 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 74 B/s, 1 keys/s, 0 objects/s recovering
Nov 23 20:43:50 compute-1 ceph-mon[80135]: 12.14 scrub starts
Nov 23 20:43:50 compute-1 ceph-mon[80135]: 12.14 scrub ok
Nov 23 20:43:50 compute-1 ceph-mon[80135]: 8.5 scrub starts
Nov 23 20:43:50 compute-1 ceph-mon[80135]: 8.5 scrub ok
Nov 23 20:43:50 compute-1 ceph-mon[80135]: Deploying daemon prometheus.compute-0 on compute-0
Nov 23 20:43:50 compute-1 ceph-mon[80135]: 9.1c scrub starts
Nov 23 20:43:50 compute-1 ceph-mon[80135]: 9.1c scrub ok
Nov 23 20:43:50 compute-1 ceph-mon[80135]: osdmap e71: 3 total, 3 up, 3 in
Nov 23 20:43:50 compute-1 sudo[86851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peoasvaslnboouhkduqpdvwxsgtkkwjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930630.389974-58-42944361887675/AnsiballZ_command.py'
Nov 23 20:43:50 compute-1 sudo[86851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:43:50 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e72 e72: 3 total, 3 up, 3 in
Nov 23 20:43:50 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 72 pg[10.1c( v 50'991 (0'0,50'991] local-lis/les=70/71 n=5 ec=57/44 lis/c=70/57 les/c/f=71/58/0 sis=72 pruub=14.846215248s) [2] async=[2] r=-1 lpr=72 pi=[57,72)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 210.761810303s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:50 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 72 pg[10.4( v 71'1001 (0'0,71'1001] local-lis/les=70/71 n=6 ec=57/44 lis/c=70/57 les/c/f=71/58/0 sis=72 pruub=14.846132278s) [2] async=[2] r=-1 lpr=72 pi=[57,72)/1 crt=59'998 lcod 71'1000 mlcod 71'1000 active pruub 210.761749268s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:50 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 72 pg[10.c( v 50'991 (0'0,50'991] local-lis/les=70/71 n=6 ec=57/44 lis/c=70/57 les/c/f=71/58/0 sis=72 pruub=14.846039772s) [2] async=[2] r=-1 lpr=72 pi=[57,72)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 210.761657715s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:50 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 72 pg[10.1c( v 50'991 (0'0,50'991] local-lis/les=70/71 n=5 ec=57/44 lis/c=70/57 les/c/f=71/58/0 sis=72 pruub=14.846134186s) [2] r=-1 lpr=72 pi=[57,72)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 210.761810303s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:50 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 72 pg[10.c( v 50'991 (0'0,50'991] local-lis/les=70/71 n=6 ec=57/44 lis/c=70/57 les/c/f=71/58/0 sis=72 pruub=14.845974922s) [2] r=-1 lpr=72 pi=[57,72)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 210.761657715s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:50 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 72 pg[10.4( v 71'1001 (0'0,71'1001] local-lis/les=70/71 n=6 ec=57/44 lis/c=70/57 les/c/f=71/58/0 sis=72 pruub=14.846064568s) [2] r=-1 lpr=72 pi=[57,72)/1 crt=59'998 lcod 71'1000 mlcod 0'0 unknown NOTIFY pruub 210.761749268s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:50 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 72 pg[10.14( v 50'991 (0'0,50'991] local-lis/les=70/71 n=5 ec=57/44 lis/c=70/57 les/c/f=71/58/0 sis=72 pruub=14.845782280s) [2] async=[2] r=-1 lpr=72 pi=[57,72)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 210.761657715s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:43:50 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 72 pg[10.14( v 50'991 (0'0,50'991] local-lis/les=70/71 n=5 ec=57/44 lis/c=70/57 les/c/f=71/58/0 sis=72 pruub=14.845746994s) [2] r=-1 lpr=72 pi=[57,72)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 210.761657715s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:43:50 compute-1 python3.9[86853]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:43:51 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:51 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:43:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:43:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:43:51.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:43:51 compute-1 ceph-mon[80135]: 8.2 scrub starts
Nov 23 20:43:51 compute-1 ceph-mon[80135]: 8.2 scrub ok
Nov 23 20:43:51 compute-1 ceph-mon[80135]: 11.1f scrub starts
Nov 23 20:43:51 compute-1 ceph-mon[80135]: 11.1f scrub ok
Nov 23 20:43:51 compute-1 ceph-mon[80135]: osdmap e72: 3 total, 3 up, 3 in
Nov 23 20:43:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:43:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:43:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:43:51.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:43:51 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:51 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c00a010 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:51 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:43:51 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e73 e73: 3 total, 3 up, 3 in
Nov 23 20:43:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:52 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 20:43:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:52 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c00a010 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:52 compute-1 ceph-mon[80135]: pgmap v86: 337 pgs: 4 remapped+peering, 4 active+remapped, 329 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 156 B/s, 7 objects/s recovering
Nov 23 20:43:52 compute-1 ceph-mon[80135]: 12.1d scrub starts
Nov 23 20:43:52 compute-1 ceph-mon[80135]: 12.1d scrub ok
Nov 23 20:43:52 compute-1 ceph-mon[80135]: 11.10 scrub starts
Nov 23 20:43:52 compute-1 ceph-mon[80135]: 11.10 scrub ok
Nov 23 20:43:52 compute-1 ceph-mon[80135]: osdmap e73: 3 total, 3 up, 3 in
Nov 23 20:43:52 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e74 e74: 3 total, 3 up, 3 in
Nov 23 20:43:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:53 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:43:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:43:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:43:53.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:43:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:43:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:43:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:43:53.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:43:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:53 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:53 compute-1 ceph-mon[80135]: 12.4 scrub starts
Nov 23 20:43:53 compute-1 ceph-mon[80135]: 12.4 scrub ok
Nov 23 20:43:53 compute-1 ceph-mon[80135]: 8.13 deep-scrub starts
Nov 23 20:43:53 compute-1 ceph-mon[80135]: 8.13 deep-scrub ok
Nov 23 20:43:53 compute-1 ceph-mon[80135]: pgmap v88: 337 pgs: 4 remapped+peering, 4 active+remapped, 329 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 156 B/s, 7 objects/s recovering
Nov 23 20:43:53 compute-1 ceph-mon[80135]: osdmap e74: 3 total, 3 up, 3 in
Nov 23 20:43:53 compute-1 ceph-mon[80135]: 9.13 scrub starts
Nov 23 20:43:53 compute-1 ceph-mon[80135]: 9.13 scrub ok
Nov 23 20:43:53 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:54 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:54 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954003110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:54 compute-1 ceph-mon[80135]: 11.11 scrub starts
Nov 23 20:43:54 compute-1 ceph-mon[80135]: 11.11 scrub ok
Nov 23 20:43:54 compute-1 ceph-mon[80135]: 12.1a deep-scrub starts
Nov 23 20:43:54 compute-1 ceph-mon[80135]: 12.1a deep-scrub ok
Nov 23 20:43:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:55 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c00a1b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:43:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:43:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:43:55.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:43:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:55 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 20:43:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:55 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:43:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:43:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:43:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:43:55.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:43:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:55 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:56 compute-1 ceph-mon[80135]: 11.6 deep-scrub starts
Nov 23 20:43:56 compute-1 ceph-mon[80135]: 11.6 deep-scrub ok
Nov 23 20:43:56 compute-1 ceph-mon[80135]: pgmap v90: 337 pgs: 4 remapped+peering, 4 active+remapped, 329 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 121 B/s, 5 objects/s recovering
Nov 23 20:43:56 compute-1 ceph-mon[80135]: 11.19 scrub starts
Nov 23 20:43:56 compute-1 ceph-mon[80135]: 11.19 scrub ok
Nov 23 20:43:56 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:56 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:56 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 20:43:56 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Nov 23 20:43:56 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.f scrub starts
Nov 23 20:43:56 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.f scrub ok
Nov 23 20:43:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:56 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:56 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:43:56 compute-1 sshd-session[82673]: Connection closed by 192.168.122.100 port 32952
Nov 23 20:43:56 compute-1 sshd-session[82654]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 23 20:43:56 compute-1 systemd[1]: session-34.scope: Deactivated successfully.
Nov 23 20:43:56 compute-1 systemd[1]: session-34.scope: Consumed 18.391s CPU time.
Nov 23 20:43:56 compute-1 systemd-logind[793]: Session 34 logged out. Waiting for processes to exit.
Nov 23 20:43:56 compute-1 systemd-logind[793]: Removed session 34.
Nov 23 20:43:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: ignoring --setuser ceph since I am not root
Nov 23 20:43:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: ignoring --setgroup ceph since I am not root
Nov 23 20:43:56 compute-1 ceph-mgr[80441]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 23 20:43:56 compute-1 ceph-mgr[80441]: pidfile_write: ignore empty --pid-file
Nov 23 20:43:56 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'alerts'
Nov 23 20:43:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:43:56.963+0000 7f9dbada4140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 23 20:43:56 compute-1 ceph-mgr[80441]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 23 20:43:56 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'balancer'
Nov 23 20:43:57 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.1 scrub starts
Nov 23 20:43:57 compute-1 ceph-mon[80135]: 9.4 deep-scrub starts
Nov 23 20:43:57 compute-1 ceph-mon[80135]: 9.4 deep-scrub ok
Nov 23 20:43:57 compute-1 ceph-mon[80135]: 12.f scrub starts
Nov 23 20:43:57 compute-1 ceph-mon[80135]: 12.f scrub ok
Nov 23 20:43:57 compute-1 ceph-mon[80135]: 9.5 scrub starts
Nov 23 20:43:57 compute-1 ceph-mon[80135]: 9.5 scrub ok
Nov 23 20:43:57 compute-1 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Nov 23 20:43:57 compute-1 ceph-mon[80135]: mgrmap e29: compute-0.oyehye(active, since 103s), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 20:43:57 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.1 scrub ok
Nov 23 20:43:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:43:57.042+0000 7f9dbada4140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 23 20:43:57 compute-1 ceph-mgr[80441]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 23 20:43:57 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'cephadm'
Nov 23 20:43:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:57 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954003a30 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:43:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:43:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:43:57.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:43:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:43:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:43:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:43:57.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:43:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:57 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c00a1b0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:57 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'crash'
Nov 23 20:43:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:43:57.832+0000 7f9dbada4140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 23 20:43:57 compute-1 ceph-mgr[80441]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 23 20:43:57 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'dashboard'
Nov 23 20:43:58 compute-1 ceph-mon[80135]: 11.15 scrub starts
Nov 23 20:43:58 compute-1 ceph-mon[80135]: 11.15 scrub ok
Nov 23 20:43:58 compute-1 ceph-mon[80135]: 12.1 scrub starts
Nov 23 20:43:58 compute-1 ceph-mon[80135]: 12.1 scrub ok
Nov 23 20:43:58 compute-1 ceph-mon[80135]: 12.1e deep-scrub starts
Nov 23 20:43:58 compute-1 ceph-mon[80135]: 12.1e deep-scrub ok
Nov 23 20:43:58 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Nov 23 20:43:58 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Nov 23 20:43:58 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'devicehealth'
Nov 23 20:43:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:43:58.462+0000 7f9dbada4140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 23 20:43:58 compute-1 ceph-mgr[80441]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 23 20:43:58 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'diskprediction_local'
Nov 23 20:43:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:58 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c002b10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 23 20:43:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 23 20:43:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]:   from numpy import show_config as show_numpy_config
Nov 23 20:43:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:43:58.632+0000 7f9dbada4140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 23 20:43:58 compute-1 ceph-mgr[80441]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 23 20:43:58 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'influx'
Nov 23 20:43:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:43:58.705+0000 7f9dbada4140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 23 20:43:58 compute-1 ceph-mgr[80441]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 23 20:43:58 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'insights'
Nov 23 20:43:58 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'iostat'
Nov 23 20:43:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:58 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 20:43:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:43:58.847+0000 7f9dbada4140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 23 20:43:58 compute-1 ceph-mgr[80441]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 23 20:43:58 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'k8sevents'
Nov 23 20:43:58 compute-1 sshd-session[86923]: Invalid user solv from 161.35.133.66 port 56442
Nov 23 20:43:58 compute-1 sshd-session[86923]: Connection closed by invalid user solv 161.35.133.66 port 56442 [preauth]
Nov 23 20:43:59 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Nov 23 20:43:59 compute-1 sudo[86851]: pam_unix(sudo:session): session closed for user root
Nov 23 20:43:59 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Nov 23 20:43:59 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'localpool'
Nov 23 20:43:59 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'mds_autoscaler'
Nov 23 20:43:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:59 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c002b10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:43:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:43:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:43:59.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:43:59 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'mirroring'
Nov 23 20:43:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:43:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 23 20:43:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:43:59.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 23 20:43:59 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'nfs'
Nov 23 20:43:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:59 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954003a30 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:43:59 compute-1 ceph-mon[80135]: 9.19 scrub starts
Nov 23 20:43:59 compute-1 ceph-mon[80135]: 9.19 scrub ok
Nov 23 20:43:59 compute-1 ceph-mon[80135]: 8.14 scrub starts
Nov 23 20:43:59 compute-1 ceph-mon[80135]: 8.14 scrub ok
Nov 23 20:43:59 compute-1 ceph-mon[80135]: 8.9 scrub starts
Nov 23 20:43:59 compute-1 ceph-mon[80135]: 8.9 scrub ok
Nov 23 20:43:59 compute-1 sshd-session[86918]: Invalid user yhli from 43.225.142.116 port 40204
Nov 23 20:43:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:43:59.854+0000 7f9dbada4140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 23 20:43:59 compute-1 ceph-mgr[80441]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 23 20:43:59 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'orchestrator'
Nov 23 20:44:00 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 8.17 deep-scrub starts
Nov 23 20:44:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:44:00.061+0000 7f9dbada4140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 23 20:44:00 compute-1 ceph-mgr[80441]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 23 20:44:00 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'osd_perf_query'
Nov 23 20:44:00 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 8.17 deep-scrub ok
Nov 23 20:44:00 compute-1 sshd-session[86918]: Received disconnect from 43.225.142.116 port 40204:11: Bye Bye [preauth]
Nov 23 20:44:00 compute-1 sshd-session[86918]: Disconnected from invalid user yhli 43.225.142.116 port 40204 [preauth]
Nov 23 20:44:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:44:00.134+0000 7f9dbada4140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 23 20:44:00 compute-1 ceph-mgr[80441]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 23 20:44:00 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'osd_support'
Nov 23 20:44:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:44:00.203+0000 7f9dbada4140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 23 20:44:00 compute-1 ceph-mgr[80441]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 23 20:44:00 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'pg_autoscaler'
Nov 23 20:44:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:44:00.287+0000 7f9dbada4140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 23 20:44:00 compute-1 ceph-mgr[80441]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 23 20:44:00 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'progress'
Nov 23 20:44:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:44:00.357+0000 7f9dbada4140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 23 20:44:00 compute-1 ceph-mgr[80441]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 23 20:44:00 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'prometheus'
Nov 23 20:44:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:00 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c00a1b0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:44:00.705+0000 7f9dbada4140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 23 20:44:00 compute-1 ceph-mgr[80441]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 23 20:44:00 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'rbd_support'
Nov 23 20:44:00 compute-1 ceph-mon[80135]: 9.1e scrub starts
Nov 23 20:44:00 compute-1 ceph-mon[80135]: 9.1e scrub ok
Nov 23 20:44:00 compute-1 ceph-mon[80135]: 9.15 scrub starts
Nov 23 20:44:00 compute-1 ceph-mon[80135]: 9.15 scrub ok
Nov 23 20:44:00 compute-1 ceph-mon[80135]: 12.17 scrub starts
Nov 23 20:44:00 compute-1 ceph-mon[80135]: 12.17 scrub ok
Nov 23 20:44:00 compute-1 ceph-mon[80135]: 7.18 scrub starts
Nov 23 20:44:00 compute-1 ceph-mon[80135]: 7.18 scrub ok
Nov 23 20:44:00 compute-1 ceph-mon[80135]: 8.17 deep-scrub starts
Nov 23 20:44:00 compute-1 ceph-mon[80135]: 8.17 deep-scrub ok
Nov 23 20:44:00 compute-1 ceph-mon[80135]: 11.a scrub starts
Nov 23 20:44:00 compute-1 ceph-mon[80135]: 11.a scrub ok
Nov 23 20:44:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:44:00.800+0000 7f9dbada4140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 23 20:44:00 compute-1 ceph-mgr[80441]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 23 20:44:00 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'restful'
Nov 23 20:44:01 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'rgw'
Nov 23 20:44:01 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Nov 23 20:44:01 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Nov 23 20:44:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:44:01.238+0000 7f9dbada4140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 23 20:44:01 compute-1 ceph-mgr[80441]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 23 20:44:01 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'rook'
Nov 23 20:44:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:01 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c002b10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:44:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:01.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:44:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:01.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:01 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c002b10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:01 compute-1 sshd-session[86487]: Connection closed by 192.168.122.30 port 57150
Nov 23 20:44:01 compute-1 sshd-session[86484]: pam_unix(sshd:session): session closed for user zuul
Nov 23 20:44:01 compute-1 systemd[1]: session-36.scope: Deactivated successfully.
Nov 23 20:44:01 compute-1 systemd[1]: session-36.scope: Consumed 8.014s CPU time.
Nov 23 20:44:01 compute-1 systemd-logind[793]: Session 36 logged out. Waiting for processes to exit.
Nov 23 20:44:01 compute-1 systemd-logind[793]: Removed session 36.
Nov 23 20:44:01 compute-1 ceph-mon[80135]: 7.1e scrub starts
Nov 23 20:44:01 compute-1 ceph-mon[80135]: 7.1e scrub ok
Nov 23 20:44:01 compute-1 ceph-mon[80135]: 11.12 scrub starts
Nov 23 20:44:01 compute-1 ceph-mon[80135]: 11.12 scrub ok
Nov 23 20:44:01 compute-1 ceph-mon[80135]: 9.17 scrub starts
Nov 23 20:44:01 compute-1 ceph-mon[80135]: 9.17 scrub ok
Nov 23 20:44:01 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:44:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:44:01.805+0000 7f9dbada4140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 23 20:44:01 compute-1 ceph-mgr[80441]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 23 20:44:01 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'selftest'
Nov 23 20:44:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:44:01.877+0000 7f9dbada4140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 23 20:44:01 compute-1 ceph-mgr[80441]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 23 20:44:01 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'snap_schedule'
Nov 23 20:44:01 compute-1 ceph-mgr[80441]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 23 20:44:01 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'stats'
Nov 23 20:44:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:44:01.963+0000 7f9dbada4140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 23 20:44:02 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 9.10 deep-scrub starts
Nov 23 20:44:02 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 9.10 deep-scrub ok
Nov 23 20:44:02 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'status'
Nov 23 20:44:02 compute-1 sshd-session[86951]: Invalid user ether from 92.118.39.92 port 48950
Nov 23 20:44:02 compute-1 ceph-mgr[80441]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 23 20:44:02 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'telegraf'
Nov 23 20:44:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:44:02.118+0000 7f9dbada4140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 23 20:44:02 compute-1 sshd-session[86951]: Connection closed by invalid user ether 92.118.39.92 port 48950 [preauth]
Nov 23 20:44:02 compute-1 ceph-mgr[80441]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 23 20:44:02 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'telemetry'
Nov 23 20:44:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:44:02.190+0000 7f9dbada4140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 23 20:44:02 compute-1 ceph-mgr[80441]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 23 20:44:02 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'test_orchestrator'
Nov 23 20:44:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:44:02.353+0000 7f9dbada4140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 23 20:44:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:02 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954003a30 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:02 compute-1 ceph-mgr[80441]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 23 20:44:02 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'volumes'
Nov 23 20:44:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:44:02.577+0000 7f9dbada4140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 23 20:44:02 compute-1 ceph-mon[80135]: 12.19 scrub starts
Nov 23 20:44:02 compute-1 ceph-mon[80135]: 12.19 scrub ok
Nov 23 20:44:02 compute-1 ceph-mon[80135]: 9.10 deep-scrub starts
Nov 23 20:44:02 compute-1 ceph-mon[80135]: 9.10 deep-scrub ok
Nov 23 20:44:02 compute-1 ceph-mon[80135]: 8.f scrub starts
Nov 23 20:44:02 compute-1 ceph-mon[80135]: 8.f scrub ok
Nov 23 20:44:02 compute-1 ceph-mon[80135]: Standby manager daemon compute-2.jtkauz restarted
Nov 23 20:44:02 compute-1 ceph-mon[80135]: Standby manager daemon compute-2.jtkauz started
Nov 23 20:44:02 compute-1 ceph-mgr[80441]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 23 20:44:02 compute-1 ceph-mgr[80441]: mgr[py] Loading python module 'zabbix'
Nov 23 20:44:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:44:02.855+0000 7f9dbada4140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 23 20:44:02 compute-1 ceph-mgr[80441]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 23 20:44:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:44:02.927+0000 7f9dbada4140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 23 20:44:02 compute-1 ceph-mgr[80441]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 20:44:02 compute-1 ceph-mgr[80441]: mgr load Constructed class from module: dashboard
Nov 23 20:44:02 compute-1 ceph-mgr[80441]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 20:44:02 compute-1 ceph-mgr[80441]: mgr load Constructed class from module: prometheus
Nov 23 20:44:02 compute-1 ceph-mgr[80441]: [dashboard INFO root] server: ssl=no host=192.168.122.101 port=8443
Nov 23 20:44:02 compute-1 ceph-mgr[80441]: [dashboard INFO root] Configured CherryPy, starting engine...
Nov 23 20:44:02 compute-1 ceph-mgr[80441]: [dashboard INFO root] Starting engine...
Nov 23 20:44:02 compute-1 ceph-mgr[80441]: ms_deliver_dispatch: unhandled message 0x55875b587860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Nov 23 20:44:02 compute-1 ceph-mgr[80441]: [prometheus INFO root] server_addr: :: server_port: 9283
Nov 23 20:44:02 compute-1 ceph-mgr[80441]: [prometheus INFO root] Starting engine...
Nov 23 20:44:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: [23/Nov/2025:20:44:02] ENGINE Bus STARTING
Nov 23 20:44:02 compute-1 ceph-mgr[80441]: [prometheus INFO cherrypy.error] [23/Nov/2025:20:44:02] ENGINE Bus STARTING
Nov 23 20:44:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: CherryPy Checker:
Nov 23 20:44:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: The Application mounted at '' has an empty config.
Nov 23 20:44:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 
Nov 23 20:44:02 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Nov 23 20:44:02 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Nov 23 20:44:03 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e75 e75: 3 total, 3 up, 3 in
Nov 23 20:44:03 compute-1 ceph-mgr[80441]: [dashboard INFO root] Engine started...
Nov 23 20:44:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: [23/Nov/2025:20:44:03] ENGINE Serving on http://:::9283
Nov 23 20:44:03 compute-1 ceph-mgr[80441]: [prometheus INFO cherrypy.error] [23/Nov/2025:20:44:03] ENGINE Serving on http://:::9283
Nov 23 20:44:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: [23/Nov/2025:20:44:03] ENGINE Bus STARTED
Nov 23 20:44:03 compute-1 ceph-mgr[80441]: [prometheus INFO cherrypy.error] [23/Nov/2025:20:44:03] ENGINE Bus STARTED
Nov 23 20:44:03 compute-1 ceph-mgr[80441]: [prometheus INFO root] Engine started.
Nov 23 20:44:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:03 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c00a1b0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:03.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:03 compute-1 sshd-session[86977]: Accepted publickey for ceph-admin from 192.168.122.100 port 44070 ssh2: RSA SHA256:ArvGVmp8+2uP4nDr4YVQ5KKtNyaQTjQGpGKaK12sPrI
Nov 23 20:44:03 compute-1 systemd-logind[793]: New session 37 of user ceph-admin.
Nov 23 20:44:03 compute-1 systemd[1]: Started Session 37 of User ceph-admin.
Nov 23 20:44:03 compute-1 sshd-session[86977]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 23 20:44:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:03.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:03 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c002b10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:03 compute-1 sudo[86982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:44:03 compute-1 sudo[86982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:03 compute-1 sudo[86982]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:03 compute-1 sudo[87007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 23 20:44:03 compute-1 sudo[87007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:03 compute-1 ceph-mon[80135]: 7.9 deep-scrub starts
Nov 23 20:44:03 compute-1 ceph-mon[80135]: 7.9 deep-scrub ok
Nov 23 20:44:03 compute-1 ceph-mon[80135]: mgrmap e30: compute-0.oyehye(active, since 109s), standbys: compute-1.kgyerp, compute-2.jtkauz
Nov 23 20:44:03 compute-1 ceph-mon[80135]: Standby manager daemon compute-1.kgyerp restarted
Nov 23 20:44:03 compute-1 ceph-mon[80135]: Standby manager daemon compute-1.kgyerp started
Nov 23 20:44:03 compute-1 ceph-mon[80135]: 11.1 scrub starts
Nov 23 20:44:03 compute-1 ceph-mon[80135]: 11.1 scrub ok
Nov 23 20:44:03 compute-1 ceph-mon[80135]: Active manager daemon compute-0.oyehye restarted
Nov 23 20:44:03 compute-1 ceph-mon[80135]: Activating manager daemon compute-0.oyehye
Nov 23 20:44:03 compute-1 ceph-mon[80135]: osdmap e75: 3 total, 3 up, 3 in
Nov 23 20:44:03 compute-1 ceph-mon[80135]: mgrmap e31: compute-0.oyehye(active, starting, since 0.0253026s), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 20:44:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 23 20:44:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 23 20:44:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 23 20:44:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.jcbopz"}]: dispatch
Nov 23 20:44:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.utubtn"}]: dispatch
Nov 23 20:44:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.gmfhnm"}]: dispatch
Nov 23 20:44:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mgr metadata", "who": "compute-0.oyehye", "id": "compute-0.oyehye"}]: dispatch
Nov 23 20:44:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mgr metadata", "who": "compute-2.jtkauz", "id": "compute-2.jtkauz"}]: dispatch
Nov 23 20:44:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mgr metadata", "who": "compute-1.kgyerp", "id": "compute-1.kgyerp"}]: dispatch
Nov 23 20:44:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 23 20:44:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 23 20:44:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 23 20:44:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 23 20:44:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 23 20:44:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 23 20:44:03 compute-1 ceph-mon[80135]: Manager daemon compute-0.oyehye is now available
Nov 23 20:44:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:44:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/mirror_snapshot_schedule"}]: dispatch
Nov 23 20:44:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/trash_purge_schedule"}]: dispatch
Nov 23 20:44:03 compute-1 ceph-mon[80135]: 9.8 scrub starts
Nov 23 20:44:03 compute-1 ceph-mon[80135]: 9.8 scrub ok
Nov 23 20:44:03 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.f scrub starts
Nov 23 20:44:03 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.f scrub ok
Nov 23 20:44:04 compute-1 podman[87106]: 2025-11-23 20:44:04.228334327 +0000 UTC m=+0.075694631 container exec e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 23 20:44:04 compute-1 podman[87106]: 2025-11-23 20:44:04.324175877 +0000 UTC m=+0.171536161 container exec_died e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 23 20:44:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:04 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/204404 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 20:44:04 compute-1 podman[87225]: 2025-11-23 20:44:04.733344879 +0000 UTC m=+0.051000702 container exec 64d60b8099df0a9bc1b978bb8d0ff809e5476e0bdc0e1ff07d52a594a6c59770 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 20:44:04 compute-1 podman[87225]: 2025-11-23 20:44:04.768385085 +0000 UTC m=+0.086040898 container exec_died 64d60b8099df0a9bc1b978bb8d0ff809e5476e0bdc0e1ff07d52a594a6c59770 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 20:44:04 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 9.d scrub starts
Nov 23 20:44:04 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 9.d scrub ok
Nov 23 20:44:05 compute-1 podman[87318]: 2025-11-23 20:44:05.078370316 +0000 UTC m=+0.055324538 container exec 466d10d0fad1c5a4f86b3f6ff6a62c2f5b4e27c7206b481850c43d696b989539 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 23 20:44:05 compute-1 ceph-mon[80135]: 12.6 scrub starts
Nov 23 20:44:05 compute-1 ceph-mon[80135]: 12.6 scrub ok
Nov 23 20:44:05 compute-1 ceph-mon[80135]: 11.f scrub starts
Nov 23 20:44:05 compute-1 ceph-mon[80135]: 11.f scrub ok
Nov 23 20:44:05 compute-1 ceph-mon[80135]: mgrmap e32: compute-0.oyehye(active, since 1.07219s), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 20:44:05 compute-1 ceph-mon[80135]: 11.8 deep-scrub starts
Nov 23 20:44:05 compute-1 ceph-mon[80135]: 11.8 deep-scrub ok
Nov 23 20:44:05 compute-1 ceph-mon[80135]: [23/Nov/2025:20:44:04] ENGINE Bus STARTING
Nov 23 20:44:05 compute-1 ceph-mon[80135]: [23/Nov/2025:20:44:04] ENGINE Serving on http://192.168.122.100:8765
Nov 23 20:44:05 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 23 20:44:05 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 23 20:44:05 compute-1 podman[87339]: 2025-11-23 20:44:05.14410609 +0000 UTC m=+0.048833258 container exec_died 466d10d0fad1c5a4f86b3f6ff6a62c2f5b4e27c7206b481850c43d696b989539 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Nov 23 20:44:05 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e76 e76: 3 total, 3 up, 3 in
Nov 23 20:44:05 compute-1 podman[87318]: 2025-11-23 20:44:05.163640823 +0000 UTC m=+0.140595035 container exec_died 466d10d0fad1c5a4f86b3f6ff6a62c2f5b4e27c7206b481850c43d696b989539 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Nov 23 20:44:05 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 76 pg[10.1e( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=76) [0] r=0 lpr=76 pi=[65,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:05 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 76 pg[10.6( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=76) [0] r=0 lpr=76 pi=[65,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:05 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 76 pg[10.e( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=76) [0] r=0 lpr=76 pi=[65,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:05 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 76 pg[10.16( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=76) [0] r=0 lpr=76 pi=[65,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:05 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 76 pg[6.e( v 49'39 (0'0,49'39] local-lis/les=62/63 n=1 ec=53/18 lis/c=62/62 les/c/f=63/63/0 sis=76 pruub=13.330209732s) [1] r=-1 lpr=76 pi=[62,76)/1 crt=49'39 mlcod 49'39 active pruub 223.509872437s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:05 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 76 pg[6.e( v 49'39 (0'0,49'39] local-lis/les=62/63 n=1 ec=53/18 lis/c=62/62 les/c/f=63/63/0 sis=76 pruub=13.330179214s) [1] r=-1 lpr=76 pi=[62,76)/1 crt=49'39 mlcod 0'0 unknown NOTIFY pruub 223.509872437s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:44:05 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 76 pg[6.6( v 49'39 (0'0,49'39] local-lis/les=62/63 n=2 ec=53/18 lis/c=62/62 les/c/f=63/63/0 sis=76 pruub=13.329833984s) [1] r=-1 lpr=76 pi=[62,76)/1 crt=49'39 mlcod 49'39 active pruub 223.509826660s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:05 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 76 pg[6.6( v 49'39 (0'0,49'39] local-lis/les=62/63 n=2 ec=53/18 lis/c=62/62 les/c/f=63/63/0 sis=76 pruub=13.329812050s) [1] r=-1 lpr=76 pi=[62,76)/1 crt=49'39 mlcod 0'0 unknown NOTIFY pruub 223.509826660s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:44:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:05 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954003bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:05.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:05 compute-1 podman[87384]: 2025-11-23 20:44:05.369349487 +0000 UTC m=+0.044553912 container exec 5efdb4ba0bcd5fe6f292f73f388707523f3095db64c5b10f074cdf2e15575dfb (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei)
Nov 23 20:44:05 compute-1 podman[87384]: 2025-11-23 20:44:05.380075023 +0000 UTC m=+0.055279448 container exec_died 5efdb4ba0bcd5fe6f292f73f388707523f3095db64c5b10f074cdf2e15575dfb (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei)
Nov 23 20:44:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:05.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:05 compute-1 podman[87450]: 2025-11-23 20:44:05.607113684 +0000 UTC m=+0.056600160 container exec 2804f80c8f66202230c93ef9e5dfb79827d221d8c2f51d077915585a4021bec3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, vcs-type=git, version=2.2.4, com.redhat.component=keepalived-container, distribution-scope=public, vendor=Red Hat, Inc., name=keepalived, release=1793, architecture=x86_64, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 23 20:44:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:05 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954003bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:05 compute-1 podman[87450]: 2025-11-23 20:44:05.64824432 +0000 UTC m=+0.097730766 container exec_died 2804f80c8f66202230c93ef9e5dfb79827d221d8c2f51d077915585a4021bec3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, architecture=x86_64, description=keepalived for Ceph, distribution-scope=public, vendor=Red Hat, Inc., release=1793, vcs-type=git, version=2.2.4, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20)
Nov 23 20:44:05 compute-1 sudo[87007]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:05 compute-1 sudo[87484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:44:05 compute-1 sudo[87484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:05 compute-1 sudo[87484]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:05 compute-1 sudo[87509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 20:44:05 compute-1 sudo[87509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:05 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 8.8 deep-scrub starts
Nov 23 20:44:05 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 8.8 deep-scrub ok
Nov 23 20:44:06 compute-1 ceph-mon[80135]: 12.1c deep-scrub starts
Nov 23 20:44:06 compute-1 ceph-mon[80135]: 12.1c deep-scrub ok
Nov 23 20:44:06 compute-1 ceph-mon[80135]: [23/Nov/2025:20:44:04] ENGINE Serving on https://192.168.122.100:7150
Nov 23 20:44:06 compute-1 ceph-mon[80135]: [23/Nov/2025:20:44:04] ENGINE Bus STARTED
Nov 23 20:44:06 compute-1 ceph-mon[80135]: [23/Nov/2025:20:44:04] ENGINE Client ('192.168.122.100', 33786) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 23 20:44:06 compute-1 ceph-mon[80135]: 9.d scrub starts
Nov 23 20:44:06 compute-1 ceph-mon[80135]: 9.d scrub ok
Nov 23 20:44:06 compute-1 ceph-mon[80135]: pgmap v4: 337 pgs: 337 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:44:06 compute-1 ceph-mon[80135]: mgrmap e33: compute-0.oyehye(active, since 2s), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 20:44:06 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 23 20:44:06 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 23 20:44:06 compute-1 ceph-mon[80135]: osdmap e76: 3 total, 3 up, 3 in
Nov 23 20:44:06 compute-1 ceph-mon[80135]: 8.b scrub starts
Nov 23 20:44:06 compute-1 ceph-mon[80135]: 8.b scrub ok
Nov 23 20:44:06 compute-1 ceph-mon[80135]: 7.b scrub starts
Nov 23 20:44:06 compute-1 ceph-mon[80135]: 7.b scrub ok
Nov 23 20:44:06 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:06 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:06 compute-1 ceph-mon[80135]: 8.8 deep-scrub starts
Nov 23 20:44:06 compute-1 ceph-mon[80135]: 8.8 deep-scrub ok
Nov 23 20:44:06 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e77 e77: 3 total, 3 up, 3 in
Nov 23 20:44:06 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 77 pg[10.16( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=77) [0]/[1] r=-1 lpr=77 pi=[65,77)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:06 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 77 pg[10.16( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=77) [0]/[1] r=-1 lpr=77 pi=[65,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 20:44:06 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 77 pg[10.e( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=77) [0]/[1] r=-1 lpr=77 pi=[65,77)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:06 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 77 pg[10.e( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=77) [0]/[1] r=-1 lpr=77 pi=[65,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 20:44:06 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 77 pg[10.6( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=77) [0]/[1] r=-1 lpr=77 pi=[65,77)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:06 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 77 pg[10.6( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=77) [0]/[1] r=-1 lpr=77 pi=[65,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 20:44:06 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 77 pg[10.1e( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=77) [0]/[1] r=-1 lpr=77 pi=[65,77)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:06 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 77 pg[10.1e( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=77) [0]/[1] r=-1 lpr=77 pi=[65,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 20:44:06 compute-1 sudo[87509]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:06 compute-1 sudo[87566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:44:06 compute-1 sudo[87566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:06 compute-1 sudo[87566]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:06 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:06 compute-1 sudo[87591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Nov 23 20:44:06 compute-1 sudo[87591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:06 compute-1 sudo[87591]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:06 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:44:06 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Nov 23 20:44:06 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Nov 23 20:44:07 compute-1 ceph-mon[80135]: osdmap e77: 3 total, 3 up, 3 in
Nov 23 20:44:07 compute-1 ceph-mon[80135]: 7.5 scrub starts
Nov 23 20:44:07 compute-1 ceph-mon[80135]: 7.5 scrub ok
Nov 23 20:44:07 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:07 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:07 compute-1 ceph-mon[80135]: 7.f scrub starts
Nov 23 20:44:07 compute-1 ceph-mon[80135]: 7.f scrub ok
Nov 23 20:44:07 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:07 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:07 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 23 20:44:07 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Nov 23 20:44:07 compute-1 ceph-mon[80135]: 11.4 scrub starts
Nov 23 20:44:07 compute-1 ceph-mon[80135]: 11.4 scrub ok
Nov 23 20:44:07 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:07 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:07 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 23 20:44:07 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 23 20:44:07 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e78 e78: 3 total, 3 up, 3 in
Nov 23 20:44:07 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 78 pg[10.f( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=78) [0] r=0 lpr=78 pi=[63,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:07 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 78 pg[10.1f( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=78) [0] r=0 lpr=78 pi=[63,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:07 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 78 pg[10.7( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=64/64 les/c/f=65/65/0 sis=78) [0] r=0 lpr=78 pi=[64,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:07 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 78 pg[10.17( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=78) [0] r=0 lpr=78 pi=[63,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:07 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:07.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:07.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:07 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6950001080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:07 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Nov 23 20:44:07 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Nov 23 20:44:08 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e79 e79: 3 total, 3 up, 3 in
Nov 23 20:44:08 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 79 pg[10.17( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=79) [0]/[2] r=-1 lpr=79 pi=[63,79)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:08 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 79 pg[10.17( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=79) [0]/[2] r=-1 lpr=79 pi=[63,79)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 20:44:08 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 79 pg[10.16( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=77/65 les/c/f=78/66/0 sis=79) [0] r=0 lpr=79 pi=[65,79)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:08 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 79 pg[10.16( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=77/65 les/c/f=78/66/0 sis=79) [0] r=0 lpr=79 pi=[65,79)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:08 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 79 pg[10.f( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=79) [0]/[2] r=-1 lpr=79 pi=[63,79)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:08 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 79 pg[10.f( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=79) [0]/[2] r=-1 lpr=79 pi=[63,79)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 20:44:08 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 79 pg[10.e( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=77/65 les/c/f=78/66/0 sis=79) [0] r=0 lpr=79 pi=[65,79)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:08 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 79 pg[10.e( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=77/65 les/c/f=78/66/0 sis=79) [0] r=0 lpr=79 pi=[65,79)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:08 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 79 pg[10.6( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=77/65 les/c/f=78/66/0 sis=79) [0] r=0 lpr=79 pi=[65,79)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:08 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 79 pg[10.6( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=77/65 les/c/f=78/66/0 sis=79) [0] r=0 lpr=79 pi=[65,79)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:08 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 79 pg[10.1f( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=79) [0]/[2] r=-1 lpr=79 pi=[63,79)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:08 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 79 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=77/65 les/c/f=78/66/0 sis=79) [0] r=0 lpr=79 pi=[65,79)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:08 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 79 pg[10.1f( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=79) [0]/[2] r=-1 lpr=79 pi=[63,79)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 20:44:08 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 79 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=77/65 les/c/f=78/66/0 sis=79) [0] r=0 lpr=79 pi=[65,79)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:08 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 79 pg[10.7( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=64/64 les/c/f=65/65/0 sis=79) [0]/[2] r=-1 lpr=79 pi=[64,79)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:08 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 79 pg[10.7( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=64/64 les/c/f=65/65/0 sis=79) [0]/[2] r=-1 lpr=79 pi=[64,79)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 20:44:08 compute-1 ceph-mon[80135]: pgmap v7: 337 pgs: 337 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:44:08 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 23 20:44:08 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 23 20:44:08 compute-1 ceph-mon[80135]: osdmap e78: 3 total, 3 up, 3 in
Nov 23 20:44:08 compute-1 ceph-mon[80135]: 8.15 scrub starts
Nov 23 20:44:08 compute-1 ceph-mon[80135]: 8.15 scrub ok
Nov 23 20:44:08 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:08 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:08 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 23 20:44:08 compute-1 ceph-mon[80135]: 7.8 scrub starts
Nov 23 20:44:08 compute-1 ceph-mon[80135]: 7.8 scrub ok
Nov 23 20:44:08 compute-1 ceph-mon[80135]: 11.7 scrub starts
Nov 23 20:44:08 compute-1 ceph-mon[80135]: 11.7 scrub ok
Nov 23 20:44:08 compute-1 ceph-mon[80135]: mgrmap e34: compute-0.oyehye(active, since 4s), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 20:44:08 compute-1 sudo[87637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 23 20:44:08 compute-1 sudo[87637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:08 compute-1 sudo[87637]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:08 compute-1 sudo[87662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph
Nov 23 20:44:08 compute-1 sudo[87662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:08 compute-1 sudo[87662]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:08 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:08 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954003bf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:08 compute-1 sudo[87687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.conf.new
Nov 23 20:44:08 compute-1 sudo[87687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:08 compute-1 sudo[87687]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:08 compute-1 sudo[87712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:44:08 compute-1 sudo[87712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:08 compute-1 sudo[87712]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:08 compute-1 sudo[87737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.conf.new
Nov 23 20:44:08 compute-1 sudo[87737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:08 compute-1 sudo[87737]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:08 compute-1 sudo[87785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.conf.new
Nov 23 20:44:08 compute-1 sudo[87785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:08 compute-1 sudo[87785]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:08 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 9.e scrub starts
Nov 23 20:44:08 compute-1 sudo[87810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.conf.new
Nov 23 20:44:08 compute-1 sudo[87810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:08 compute-1 sudo[87810]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:08 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 9.e scrub ok
Nov 23 20:44:08 compute-1 sudo[87835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 23 20:44:08 compute-1 sudo[87835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:08 compute-1 sudo[87835]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:08 compute-1 sudo[87860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config
Nov 23 20:44:08 compute-1 sudo[87860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:08 compute-1 sudo[87860]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:09 compute-1 sudo[87885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config
Nov 23 20:44:09 compute-1 sudo[87885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:09 compute-1 sudo[87885]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:09 compute-1 sudo[87910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf.new
Nov 23 20:44:09 compute-1 sudo[87910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:09 compute-1 sudo[87910]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:09 compute-1 sudo[87935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:44:09 compute-1 sudo[87935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:09 compute-1 sudo[87935]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:09 compute-1 sudo[87960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf.new
Nov 23 20:44:09 compute-1 sudo[87960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:09 compute-1 sudo[87960]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:09 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e80 e80: 3 total, 3 up, 3 in
Nov 23 20:44:09 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 80 pg[10.8( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=80 pruub=8.231281281s) [1] r=-1 lpr=80 pi=[57,80)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 222.432464600s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:09 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 80 pg[10.8( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=80 pruub=8.231248856s) [1] r=-1 lpr=80 pi=[57,80)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.432464600s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:44:09 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 80 pg[10.18( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=80 pruub=8.231938362s) [1] r=-1 lpr=80 pi=[57,80)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 222.433715820s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:09 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 80 pg[10.18( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=80 pruub=8.231907845s) [1] r=-1 lpr=80 pi=[57,80)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.433715820s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:44:09 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 80 pg[10.16( v 50'991 (0'0,50'991] local-lis/les=79/80 n=5 ec=57/44 lis/c=77/65 les/c/f=78/66/0 sis=79) [0] r=0 lpr=79 pi=[65,79)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:44:09 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 80 pg[10.e( v 50'991 (0'0,50'991] local-lis/les=79/80 n=6 ec=57/44 lis/c=77/65 les/c/f=78/66/0 sis=79) [0] r=0 lpr=79 pi=[65,79)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:44:09 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 80 pg[10.6( v 50'991 (0'0,50'991] local-lis/les=79/80 n=6 ec=57/44 lis/c=77/65 les/c/f=78/66/0 sis=79) [0] r=0 lpr=79 pi=[65,79)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:44:09 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 80 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=79/80 n=5 ec=57/44 lis/c=77/65 les/c/f=78/66/0 sis=79) [0] r=0 lpr=79 pi=[65,79)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:44:09 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 80 pg[6.8( empty local-lis/les=0/0 n=0 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=80) [0] r=0 lpr=80 pi=[53,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:09 compute-1 ceph-mon[80135]: osdmap e79: 3 total, 3 up, 3 in
Nov 23 20:44:09 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:09 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:09 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 23 20:44:09 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:44:09 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 20:44:09 compute-1 ceph-mon[80135]: Updating compute-0:/etc/ceph/ceph.conf
Nov 23 20:44:09 compute-1 ceph-mon[80135]: Updating compute-1:/etc/ceph/ceph.conf
Nov 23 20:44:09 compute-1 ceph-mon[80135]: Updating compute-2:/etc/ceph/ceph.conf
Nov 23 20:44:09 compute-1 ceph-mon[80135]: 11.13 scrub starts
Nov 23 20:44:09 compute-1 ceph-mon[80135]: 11.13 scrub ok
Nov 23 20:44:09 compute-1 ceph-mon[80135]: 7.3 scrub starts
Nov 23 20:44:09 compute-1 ceph-mon[80135]: 7.3 scrub ok
Nov 23 20:44:09 compute-1 ceph-mon[80135]: 9.e scrub starts
Nov 23 20:44:09 compute-1 ceph-mon[80135]: 9.e scrub ok
Nov 23 20:44:09 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 23 20:44:09 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 23 20:44:09 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 23 20:44:09 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 23 20:44:09 compute-1 ceph-mon[80135]: osdmap e80: 3 total, 3 up, 3 in
Nov 23 20:44:09 compute-1 sudo[88008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf.new
Nov 23 20:44:09 compute-1 sudo[88008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:09 compute-1 sudo[88008]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:09 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:09.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:09 compute-1 sudo[88033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf.new
Nov 23 20:44:09 compute-1 sudo[88033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:09 compute-1 sudo[88033]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:09 compute-1 sudo[88058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf.new /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 20:44:09 compute-1 sudo[88058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:09 compute-1 sudo[88058]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:09 compute-1 sudo[88083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 23 20:44:09 compute-1 sudo[88083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:09 compute-1 sudo[88083]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:09 compute-1 sudo[88109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph
Nov 23 20:44:09 compute-1 sudo[88109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:09 compute-1 sudo[88109]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:44:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:09.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:44:09 compute-1 sudo[88134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.client.admin.keyring.new
Nov 23 20:44:09 compute-1 sudo[88134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:09 compute-1 sudo[88134]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:09 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:09 compute-1 sudo[88159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:44:09 compute-1 sudo[88159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:09 compute-1 sudo[88159]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:09 compute-1 sudo[88184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.client.admin.keyring.new
Nov 23 20:44:09 compute-1 sudo[88184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:09 compute-1 sudo[88184]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:09 compute-1 sudo[88232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.client.admin.keyring.new
Nov 23 20:44:09 compute-1 sudo[88232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:09 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Nov 23 20:44:09 compute-1 sudo[88232]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:09 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Nov 23 20:44:09 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e81 e81: 3 total, 3 up, 3 in
Nov 23 20:44:09 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 81 pg[10.f( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=79/63 les/c/f=80/64/0 sis=81) [0] r=0 lpr=81 pi=[63,81)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:09 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 81 pg[10.f( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=79/63 les/c/f=80/64/0 sis=81) [0] r=0 lpr=81 pi=[63,81)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:09 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 81 pg[10.17( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=79/63 les/c/f=80/64/0 sis=81) [0] r=0 lpr=81 pi=[63,81)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:09 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 81 pg[10.8( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=81) [1]/[0] r=0 lpr=81 pi=[57,81)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:09 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 81 pg[10.17( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=79/63 les/c/f=80/64/0 sis=81) [0] r=0 lpr=81 pi=[63,81)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:09 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 81 pg[10.8( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=81) [1]/[0] r=0 lpr=81 pi=[57,81)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:09 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 81 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=79/63 les/c/f=80/64/0 sis=81) [0] r=0 lpr=81 pi=[63,81)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:09 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 81 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=79/63 les/c/f=80/64/0 sis=81) [0] r=0 lpr=81 pi=[63,81)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:09 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 81 pg[10.18( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=81) [1]/[0] r=0 lpr=81 pi=[57,81)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:09 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 81 pg[10.18( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=81) [1]/[0] r=0 lpr=81 pi=[57,81)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:09 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 81 pg[10.7( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=79/64 les/c/f=80/65/0 sis=81) [0] r=0 lpr=81 pi=[64,81)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:09 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 81 pg[10.7( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=79/64 les/c/f=80/65/0 sis=81) [0] r=0 lpr=81 pi=[64,81)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:09 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 81 pg[6.8( v 49'39 (0'0,49'39] local-lis/les=80/81 n=0 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=80) [0] r=0 lpr=80 pi=[53,80)/1 crt=49'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:44:09 compute-1 sudo[88257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.client.admin.keyring.new
Nov 23 20:44:09 compute-1 sudo[88257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:09 compute-1 sudo[88257]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:09 compute-1 sudo[88282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Nov 23 20:44:09 compute-1 sudo[88282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:09 compute-1 sudo[88282]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:09 compute-1 sudo[88307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config
Nov 23 20:44:09 compute-1 sudo[88307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:09 compute-1 sudo[88307]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:10 compute-1 sudo[88332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config
Nov 23 20:44:10 compute-1 sudo[88332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:10 compute-1 sudo[88332]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:10 compute-1 sudo[88357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring.new
Nov 23 20:44:10 compute-1 sudo[88357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:10 compute-1 sudo[88357]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:10 compute-1 sudo[88382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:44:10 compute-1 sudo[88382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:10 compute-1 sudo[88382]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:10 compute-1 sudo[88407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring.new
Nov 23 20:44:10 compute-1 sudo[88407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:10 compute-1 sudo[88407]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:10 compute-1 sudo[88455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring.new
Nov 23 20:44:10 compute-1 sudo[88455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:10 compute-1 sudo[88455]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:10 compute-1 sudo[88480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring.new
Nov 23 20:44:10 compute-1 sudo[88480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:10 compute-1 sudo[88480]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:10 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500024d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:10 compute-1 sudo[88505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-03808be8-ae4a-5548-82e6-4a294f1bc627/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring.new /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 20:44:10 compute-1 sudo[88505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:10 compute-1 sudo[88505]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:10 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Nov 23 20:44:10 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Nov 23 20:44:10 compute-1 ceph-mon[80135]: Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 20:44:10 compute-1 ceph-mon[80135]: Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 20:44:10 compute-1 ceph-mon[80135]: pgmap v10: 337 pgs: 337 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:44:10 compute-1 ceph-mon[80135]: Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 20:44:10 compute-1 ceph-mon[80135]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 23 20:44:10 compute-1 ceph-mon[80135]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 23 20:44:10 compute-1 ceph-mon[80135]: 7.e deep-scrub starts
Nov 23 20:44:10 compute-1 ceph-mon[80135]: 7.e deep-scrub ok
Nov 23 20:44:10 compute-1 ceph-mon[80135]: 11.5 scrub starts
Nov 23 20:44:10 compute-1 ceph-mon[80135]: 11.5 scrub ok
Nov 23 20:44:10 compute-1 ceph-mon[80135]: osdmap e81: 3 total, 3 up, 3 in
Nov 23 20:44:10 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:10 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:10 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:10 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:10 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e82 e82: 3 total, 3 up, 3 in
Nov 23 20:44:10 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 82 pg[10.7( v 50'991 (0'0,50'991] local-lis/les=81/82 n=6 ec=57/44 lis/c=79/64 les/c/f=80/65/0 sis=81) [0] r=0 lpr=81 pi=[64,81)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:44:10 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 82 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=81/82 n=5 ec=57/44 lis/c=79/63 les/c/f=80/64/0 sis=81) [0] r=0 lpr=81 pi=[63,81)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:44:10 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 82 pg[10.f( v 50'991 (0'0,50'991] local-lis/les=81/82 n=6 ec=57/44 lis/c=79/63 les/c/f=80/64/0 sis=81) [0] r=0 lpr=81 pi=[63,81)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:44:10 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 82 pg[10.17( v 50'991 (0'0,50'991] local-lis/les=81/82 n=5 ec=57/44 lis/c=79/63 les/c/f=80/64/0 sis=81) [0] r=0 lpr=81 pi=[63,81)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:44:11 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 82 pg[10.8( v 50'991 (0'0,50'991] local-lis/les=81/82 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=81) [1]/[0] async=[1] r=0 lpr=81 pi=[57,81)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:44:11 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 82 pg[10.18( v 50'991 (0'0,50'991] local-lis/les=81/82 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=81) [1]/[0] async=[1] r=0 lpr=81 pi=[57,81)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:44:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:11 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954004990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 23 20:44:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:11.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 23 20:44:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:11.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:11 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:11 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.1b deep-scrub starts
Nov 23 20:44:11 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.1b deep-scrub ok
Nov 23 20:44:11 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:44:11 compute-1 ceph-mon[80135]: Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 20:44:11 compute-1 ceph-mon[80135]: Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 20:44:11 compute-1 ceph-mon[80135]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 23 20:44:11 compute-1 ceph-mon[80135]: 12.12 scrub starts
Nov 23 20:44:11 compute-1 ceph-mon[80135]: 12.12 scrub ok
Nov 23 20:44:11 compute-1 ceph-mon[80135]: 8.4 scrub starts
Nov 23 20:44:11 compute-1 ceph-mon[80135]: 8.4 scrub ok
Nov 23 20:44:11 compute-1 ceph-mon[80135]: osdmap e82: 3 total, 3 up, 3 in
Nov 23 20:44:11 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:11 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:11 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:11 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:11 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 20:44:11 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 20:44:11 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:44:11 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e83 e83: 3 total, 3 up, 3 in
Nov 23 20:44:11 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 83 pg[10.18( v 50'991 (0'0,50'991] local-lis/les=81/82 n=5 ec=57/44 lis/c=81/57 les/c/f=82/58/0 sis=83 pruub=15.369442940s) [1] async=[1] r=-1 lpr=83 pi=[57,83)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 232.239044189s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:11 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 83 pg[10.8( v 50'991 (0'0,50'991] local-lis/les=81/82 n=6 ec=57/44 lis/c=81/57 les/c/f=82/58/0 sis=83 pruub=15.369385719s) [1] async=[1] r=-1 lpr=83 pi=[57,83)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 232.238983154s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:11 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 83 pg[10.18( v 50'991 (0'0,50'991] local-lis/les=81/82 n=5 ec=57/44 lis/c=81/57 les/c/f=82/58/0 sis=83 pruub=15.369388580s) [1] r=-1 lpr=83 pi=[57,83)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 232.239044189s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:44:11 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 83 pg[10.8( v 50'991 (0'0,50'991] local-lis/les=81/82 n=6 ec=57/44 lis/c=81/57 les/c/f=82/58/0 sis=83 pruub=15.369308472s) [1] r=-1 lpr=83 pi=[57,83)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 232.238983154s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:44:12 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:12 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:12 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Nov 23 20:44:12 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Nov 23 20:44:12 compute-1 ceph-mon[80135]: Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 20:44:12 compute-1 ceph-mon[80135]: pgmap v14: 337 pgs: 4 remapped+peering, 4 peering, 329 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 15 op/s; 56 B/s, 5 objects/s recovering
Nov 23 20:44:12 compute-1 ceph-mon[80135]: 7.2 scrub starts
Nov 23 20:44:12 compute-1 ceph-mon[80135]: 7.2 scrub ok
Nov 23 20:44:12 compute-1 ceph-mon[80135]: 11.1b deep-scrub starts
Nov 23 20:44:12 compute-1 ceph-mon[80135]: 11.1b deep-scrub ok
Nov 23 20:44:12 compute-1 ceph-mon[80135]: osdmap e83: 3 total, 3 up, 3 in
Nov 23 20:44:12 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e84 e84: 3 total, 3 up, 3 in
Nov 23 20:44:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:13 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6930000f30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:13.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:13.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:13 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500024d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:13 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Nov 23 20:44:13 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Nov 23 20:44:13 compute-1 ceph-mon[80135]: 12.e deep-scrub starts
Nov 23 20:44:13 compute-1 ceph-mon[80135]: 12.e deep-scrub ok
Nov 23 20:44:13 compute-1 ceph-mon[80135]: 11.1d scrub starts
Nov 23 20:44:13 compute-1 ceph-mon[80135]: 11.1d scrub ok
Nov 23 20:44:13 compute-1 ceph-mon[80135]: osdmap e84: 3 total, 3 up, 3 in
Nov 23 20:44:13 compute-1 ceph-mon[80135]: pgmap v17: 337 pgs: 4 remapped+peering, 4 peering, 329 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:44:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:14 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500024d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:14 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 9.a scrub starts
Nov 23 20:44:14 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 9.a scrub ok
Nov 23 20:44:14 compute-1 ceph-mon[80135]: 12.a scrub starts
Nov 23 20:44:14 compute-1 ceph-mon[80135]: 12.a scrub ok
Nov 23 20:44:14 compute-1 ceph-mon[80135]: 11.1c scrub starts
Nov 23 20:44:14 compute-1 ceph-mon[80135]: 11.1c scrub ok
Nov 23 20:44:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:15 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:15.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:15.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:15 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6930000f30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:15 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Nov 23 20:44:15 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Nov 23 20:44:15 compute-1 ceph-mon[80135]: 12.c scrub starts
Nov 23 20:44:15 compute-1 ceph-mon[80135]: 12.c scrub ok
Nov 23 20:44:15 compute-1 ceph-mon[80135]: 9.a scrub starts
Nov 23 20:44:15 compute-1 ceph-mon[80135]: 9.a scrub ok
Nov 23 20:44:15 compute-1 ceph-mon[80135]: pgmap v18: 337 pgs: 337 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 147 B/s, 5 objects/s recovering
Nov 23 20:44:15 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 23 20:44:15 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 23 20:44:15 compute-1 ceph-mon[80135]: 9.b scrub starts
Nov 23 20:44:15 compute-1 ceph-mon[80135]: 9.b scrub ok
Nov 23 20:44:15 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e85 e85: 3 total, 3 up, 3 in
Nov 23 20:44:15 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 85 pg[10.19( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=85) [0] r=0 lpr=85 pi=[63,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:15 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 85 pg[10.9( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=64/64 les/c/f=65/65/0 sis=85) [0] r=0 lpr=85 pi=[64,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:16 compute-1 sudo[88534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:44:16 compute-1 sudo[88534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:16 compute-1 sudo[88534]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:16 compute-1 sudo[88559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 20:44:16 compute-1 sudo[88559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:16 compute-1 sudo[88559]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:16 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500024d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:16 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Nov 23 20:44:16 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Nov 23 20:44:16 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:44:16 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e86 e86: 3 total, 3 up, 3 in
Nov 23 20:44:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 86 pg[10.19( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=86) [0]/[2] r=-1 lpr=86 pi=[63,86)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 86 pg[10.9( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=64/64 les/c/f=65/65/0 sis=86) [0]/[2] r=-1 lpr=86 pi=[64,86)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 86 pg[10.9( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=64/64 les/c/f=65/65/0 sis=86) [0]/[2] r=-1 lpr=86 pi=[64,86)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 20:44:16 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 86 pg[10.19( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=86) [0]/[2] r=-1 lpr=86 pi=[63,86)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 20:44:16 compute-1 ceph-mon[80135]: 7.6 scrub starts
Nov 23 20:44:16 compute-1 ceph-mon[80135]: 7.6 scrub ok
Nov 23 20:44:16 compute-1 ceph-mon[80135]: 11.1e scrub starts
Nov 23 20:44:16 compute-1 ceph-mon[80135]: 11.1e scrub ok
Nov 23 20:44:16 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 23 20:44:16 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 23 20:44:16 compute-1 ceph-mon[80135]: osdmap e85: 3 total, 3 up, 3 in
Nov 23 20:44:16 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:16 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:16 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:16 compute-1 ceph-mon[80135]: 9.9 deep-scrub starts
Nov 23 20:44:16 compute-1 ceph-mon[80135]: 9.9 deep-scrub ok
Nov 23 20:44:16 compute-1 ceph-mon[80135]: Reconfiguring mon.compute-0 (monmap changed)...
Nov 23 20:44:16 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 23 20:44:16 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 23 20:44:16 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:44:16 compute-1 ceph-mon[80135]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 23 20:44:17 compute-1 sshd-session[88585]: Accepted publickey for zuul from 192.168.122.30 port 52886 ssh2: ECDSA SHA256:7LF3rB/846W//CS4OIcVKlH1BXQGVCcZuH+b9rjPyTo
Nov 23 20:44:17 compute-1 systemd-logind[793]: New session 38 of user zuul.
Nov 23 20:44:17 compute-1 systemd[1]: Started Session 38 of User zuul.
Nov 23 20:44:17 compute-1 sshd-session[88585]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 20:44:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:17 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500024d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:44:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:17.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:44:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:17.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:17 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:17 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Nov 23 20:44:17 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Nov 23 20:44:17 compute-1 ceph-mon[80135]: 12.b scrub starts
Nov 23 20:44:17 compute-1 ceph-mon[80135]: 12.b scrub ok
Nov 23 20:44:17 compute-1 ceph-mon[80135]: 9.12 scrub starts
Nov 23 20:44:17 compute-1 ceph-mon[80135]: 9.12 scrub ok
Nov 23 20:44:17 compute-1 ceph-mon[80135]: osdmap e86: 3 total, 3 up, 3 in
Nov 23 20:44:17 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:17 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:17 compute-1 ceph-mon[80135]: Reconfiguring mgr.compute-0.oyehye (monmap changed)...
Nov 23 20:44:17 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.oyehye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 23 20:44:17 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 23 20:44:17 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:44:17 compute-1 ceph-mon[80135]: Reconfiguring daemon mgr.compute-0.oyehye on compute-0
Nov 23 20:44:17 compute-1 ceph-mon[80135]: pgmap v21: 337 pgs: 337 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 148 B/s, 5 objects/s recovering
Nov 23 20:44:17 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 23 20:44:17 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 23 20:44:17 compute-1 ceph-mon[80135]: 11.17 scrub starts
Nov 23 20:44:17 compute-1 ceph-mon[80135]: 11.17 scrub ok
Nov 23 20:44:17 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:17 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:17 compute-1 ceph-mon[80135]: Reconfiguring crash.compute-0 (monmap changed)...
Nov 23 20:44:17 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 23 20:44:17 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:44:17 compute-1 ceph-mon[80135]: Reconfiguring daemon crash.compute-0 on compute-0
Nov 23 20:44:17 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e87 e87: 3 total, 3 up, 3 in
Nov 23 20:44:18 compute-1 python3.9[88739]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 23 20:44:18 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:18 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:18 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Nov 23 20:44:18 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Nov 23 20:44:18 compute-1 ceph-mon[80135]: 12.8 scrub starts
Nov 23 20:44:18 compute-1 ceph-mon[80135]: 12.8 scrub ok
Nov 23 20:44:18 compute-1 ceph-mon[80135]: 8.19 scrub starts
Nov 23 20:44:18 compute-1 ceph-mon[80135]: 8.19 scrub ok
Nov 23 20:44:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 23 20:44:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 23 20:44:18 compute-1 ceph-mon[80135]: osdmap e87: 3 total, 3 up, 3 in
Nov 23 20:44:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:44:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:18 compute-1 ceph-mon[80135]: Reconfiguring osd.1 (monmap changed)...
Nov 23 20:44:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 23 20:44:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:44:18 compute-1 ceph-mon[80135]: Reconfiguring daemon osd.1 on compute-0
Nov 23 20:44:18 compute-1 ceph-mon[80135]: 8.a deep-scrub starts
Nov 23 20:44:18 compute-1 ceph-mon[80135]: 8.a deep-scrub ok
Nov 23 20:44:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:19 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e88 e88: 3 total, 3 up, 3 in
Nov 23 20:44:19 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 88 pg[10.1a( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=87) [0] r=0 lpr=88 pi=[65,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:19 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 88 pg[10.9( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=86/64 les/c/f=87/65/0 sis=88) [0] r=0 lpr=88 pi=[64,88)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:19 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 88 pg[10.9( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=86/64 les/c/f=87/65/0 sis=88) [0] r=0 lpr=88 pi=[64,88)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:19 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 88 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=86/63 les/c/f=87/64/0 sis=88) [0] r=0 lpr=88 pi=[63,88)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:19 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 88 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=86/63 les/c/f=87/64/0 sis=88) [0] r=0 lpr=88 pi=[63,88)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:19 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 88 pg[10.a( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=87) [0] r=0 lpr=88 pi=[65,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:19 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:19 compute-1 python3.9[88913]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:44:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:19.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:44:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:19.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:44:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:19 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6950003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:19 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Nov 23 20:44:19 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Nov 23 20:44:20 compute-1 sudo[89068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdppglwicevrtncolmuqqinwqugyoktn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930659.946256-94-220117663696183/AnsiballZ_command.py'
Nov 23 20:44:20 compute-1 sudo[89068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:44:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:20 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69240016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/204420 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 20:44:20 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 9.f scrub starts
Nov 23 20:44:20 compute-1 python3.9[89070]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:44:20 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 9.f scrub ok
Nov 23 20:44:20 compute-1 sudo[89068]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:21 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:21.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:21.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:21 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:21 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Nov 23 20:44:21 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Nov 23 20:44:21 compute-1 sudo[89222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raswpzudcgyjnkevqezlovqznllkcmmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930661.2681072-130-138505976896034/AnsiballZ_stat.py'
Nov 23 20:44:21 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:44:21 compute-1 sudo[89222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:44:22 compute-1 python3.9[89224]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:44:22 compute-1 sudo[89222]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:22 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e89 e89: 3 total, 3 up, 3 in
Nov 23 20:44:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 89 pg[10.1b( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=64/64 les/c/f=65/65/0 sis=89) [0] r=0 lpr=89 pi=[64,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 89 pg[10.b( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=89) [0] r=0 lpr=89 pi=[63,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 89 pg[10.a( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=89) [0]/[1] r=-1 lpr=89 pi=[65,89)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 89 pg[10.a( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=89) [0]/[1] r=-1 lpr=89 pi=[65,89)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 20:44:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 89 pg[10.1a( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=89) [0]/[1] r=-1 lpr=89 pi=[65,89)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 89 pg[10.1a( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=89) [0]/[1] r=-1 lpr=89 pi=[65,89)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 20:44:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 89 pg[6.b( v 49'39 (0'0,49'39] local-lis/les=67/68 n=1 ec=53/18 lis/c=67/67 les/c/f=68/68/0 sis=89 pruub=13.820906639s) [1] r=-1 lpr=89 pi=[67,89)/1 crt=49'39 mlcod 49'39 active pruub 241.261001587s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 89 pg[6.b( v 49'39 (0'0,49'39] local-lis/les=67/68 n=1 ec=53/18 lis/c=67/67 les/c/f=68/68/0 sis=89 pruub=13.820881844s) [1] r=-1 lpr=89 pi=[67,89)/1 crt=49'39 mlcod 0'0 unknown NOTIFY pruub 241.261001587s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:44:22 compute-1 ceph-mon[80135]: 7.1b scrub starts
Nov 23 20:44:22 compute-1 ceph-mon[80135]: 7.1b scrub ok
Nov 23 20:44:22 compute-1 ceph-mon[80135]: 8.12 scrub starts
Nov 23 20:44:22 compute-1 ceph-mon[80135]: 8.12 scrub ok
Nov 23 20:44:22 compute-1 ceph-mon[80135]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Nov 23 20:44:22 compute-1 ceph-mon[80135]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Nov 23 20:44:22 compute-1 ceph-mon[80135]: osdmap e88: 3 total, 3 up, 3 in
Nov 23 20:44:22 compute-1 ceph-mon[80135]: pgmap v24: 337 pgs: 337 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:44:22 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 23 20:44:22 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 23 20:44:22 compute-1 ceph-mon[80135]: 11.16 deep-scrub starts
Nov 23 20:44:22 compute-1 ceph-mon[80135]: 11.16 deep-scrub ok
Nov 23 20:44:22 compute-1 ceph-mon[80135]: 12.10 scrub starts
Nov 23 20:44:22 compute-1 ceph-mon[80135]: 12.10 scrub ok
Nov 23 20:44:22 compute-1 ceph-mon[80135]: 11.1a scrub starts
Nov 23 20:44:22 compute-1 ceph-mon[80135]: 11.1a scrub ok
Nov 23 20:44:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 89 pg[10.9( v 50'991 (0'0,50'991] local-lis/les=88/89 n=6 ec=57/44 lis/c=86/64 les/c/f=87/65/0 sis=88) [0] r=0 lpr=88 pi=[64,88)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:44:22 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 89 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=88/89 n=5 ec=57/44 lis/c=86/63 les/c/f=87/64/0 sis=88) [0] r=0 lpr=88 pi=[63,88)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:44:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:22 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6950003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:22 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Nov 23 20:44:22 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Nov 23 20:44:23 compute-1 sudo[89376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csaanbkjmbdgrowqhyyufuaylkhqczzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930662.8101432-163-225249113525845/AnsiballZ_file.py'
Nov 23 20:44:23 compute-1 sudo[89376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:44:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:23 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69240016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:23.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:23 compute-1 python3.9[89378]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:44:23 compute-1 sudo[89376]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:23 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e90 e90: 3 total, 3 up, 3 in
Nov 23 20:44:23 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 90 pg[10.b( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=90) [0]/[2] r=-1 lpr=90 pi=[63,90)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:23 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 90 pg[10.b( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=90) [0]/[2] r=-1 lpr=90 pi=[63,90)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 20:44:23 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 90 pg[10.1b( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=64/64 les/c/f=65/65/0 sis=90) [0]/[2] r=-1 lpr=90 pi=[64,90)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:23 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 90 pg[10.1b( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=64/64 les/c/f=65/65/0 sis=90) [0]/[2] r=-1 lpr=90 pi=[64,90)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 20:44:23 compute-1 ceph-mon[80135]: 9.3 scrub starts
Nov 23 20:44:23 compute-1 ceph-mon[80135]: 9.3 scrub ok
Nov 23 20:44:23 compute-1 ceph-mon[80135]: 10.12 scrub starts
Nov 23 20:44:23 compute-1 ceph-mon[80135]: 9.f scrub starts
Nov 23 20:44:23 compute-1 ceph-mon[80135]: 9.f scrub ok
Nov 23 20:44:23 compute-1 ceph-mon[80135]: pgmap v25: 337 pgs: 2 unknown, 2 peering, 333 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 21 B/s, 0 objects/s recovering
Nov 23 20:44:23 compute-1 ceph-mon[80135]: 9.18 scrub starts
Nov 23 20:44:23 compute-1 ceph-mon[80135]: 9.18 scrub ok
Nov 23 20:44:23 compute-1 ceph-mon[80135]: 10.12 scrub ok
Nov 23 20:44:23 compute-1 ceph-mon[80135]: 10.d deep-scrub starts
Nov 23 20:44:23 compute-1 ceph-mon[80135]: 11.14 scrub starts
Nov 23 20:44:23 compute-1 ceph-mon[80135]: 11.14 scrub ok
Nov 23 20:44:23 compute-1 ceph-mon[80135]: 8.16 scrub starts
Nov 23 20:44:23 compute-1 ceph-mon[80135]: 8.16 scrub ok
Nov 23 20:44:23 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 23 20:44:23 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 23 20:44:23 compute-1 ceph-mon[80135]: 10.d deep-scrub ok
Nov 23 20:44:23 compute-1 ceph-mon[80135]: osdmap e89: 3 total, 3 up, 3 in
Nov 23 20:44:23 compute-1 ceph-mon[80135]: 6.6 scrub starts
Nov 23 20:44:23 compute-1 ceph-mon[80135]: 6.6 scrub ok
Nov 23 20:44:23 compute-1 ceph-mon[80135]: 8.1b scrub starts
Nov 23 20:44:23 compute-1 ceph-mon[80135]: 8.1b scrub ok
Nov 23 20:44:23 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:23 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:23 compute-1 ceph-mon[80135]: osdmap e90: 3 total, 3 up, 3 in
Nov 23 20:44:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:23.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:23 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:23 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Nov 23 20:44:23 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Nov 23 20:44:24 compute-1 sudo[89529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygyqrjkknqzieaqvnpggueevaysucudb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930663.851095-190-79953658793990/AnsiballZ_file.py'
Nov 23 20:44:24 compute-1 sudo[89529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:44:24 compute-1 python3.9[89531]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:44:24 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e91 e91: 3 total, 3 up, 3 in
Nov 23 20:44:24 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 91 pg[10.a( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=89/65 les/c/f=90/66/0 sis=91) [0] r=0 lpr=91 pi=[65,91)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:24 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 91 pg[10.a( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=89/65 les/c/f=90/66/0 sis=91) [0] r=0 lpr=91 pi=[65,91)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:24 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 91 pg[10.1a( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=89/65 les/c/f=90/66/0 sis=91) [0] r=0 lpr=91 pi=[65,91)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:24 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 91 pg[10.1a( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=89/65 les/c/f=90/66/0 sis=91) [0] r=0 lpr=91 pi=[65,91)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:24 compute-1 sudo[89529]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:24 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:24 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:25 compute-1 python3.9[89681]: ansible-ansible.builtin.service_facts Invoked
Nov 23 20:44:25 compute-1 network[89698]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 20:44:25 compute-1 network[89699]: 'network-scripts' will be removed from distribution in near future.
Nov 23 20:44:25 compute-1 network[89700]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 20:44:25 compute-1 ceph-mon[80135]: pgmap v27: 337 pgs: 2 unknown, 2 peering, 333 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 23 20:44:25 compute-1 ceph-mon[80135]: 11.3 scrub starts
Nov 23 20:44:25 compute-1 ceph-mon[80135]: 11.3 scrub ok
Nov 23 20:44:25 compute-1 ceph-mon[80135]: Reconfiguring grafana.compute-0 (dependencies changed)...
Nov 23 20:44:25 compute-1 ceph-mon[80135]: Reconfiguring daemon grafana.compute-0 on compute-0
Nov 23 20:44:25 compute-1 ceph-mon[80135]: 8.18 scrub starts
Nov 23 20:44:25 compute-1 ceph-mon[80135]: 8.18 scrub ok
Nov 23 20:44:25 compute-1 ceph-mon[80135]: osdmap e91: 3 total, 3 up, 3 in
Nov 23 20:44:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:25 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500046d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:44:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:25.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:44:25 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e92 e92: 3 total, 3 up, 3 in
Nov 23 20:44:25 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 92 pg[10.b( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=90/63 les/c/f=91/64/0 sis=92) [0] r=0 lpr=92 pi=[63,92)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:25 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 92 pg[10.b( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=90/63 les/c/f=91/64/0 sis=92) [0] r=0 lpr=92 pi=[63,92)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:25 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 92 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=90/64 les/c/f=91/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:25 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 92 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=90/64 les/c/f=91/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:25 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 92 pg[10.a( v 50'991 (0'0,50'991] local-lis/les=91/92 n=6 ec=57/44 lis/c=89/65 les/c/f=90/66/0 sis=91) [0] r=0 lpr=91 pi=[65,91)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:44:25 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 92 pg[10.1a( v 50'991 (0'0,50'991] local-lis/les=91/92 n=5 ec=57/44 lis/c=89/65 les/c/f=90/66/0 sis=91) [0] r=0 lpr=91 pi=[65,91)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:44:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:25.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:25 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69240016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:25 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.a scrub starts
Nov 23 20:44:25 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.a scrub ok
Nov 23 20:44:26 compute-1 ceph-mon[80135]: 12.3 deep-scrub starts
Nov 23 20:44:26 compute-1 ceph-mon[80135]: 12.3 deep-scrub ok
Nov 23 20:44:26 compute-1 ceph-mon[80135]: pgmap v30: 337 pgs: 2 remapped+peering, 2 active+remapped, 333 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:44:26 compute-1 ceph-mon[80135]: osdmap e92: 3 total, 3 up, 3 in
Nov 23 20:44:26 compute-1 ceph-mon[80135]: 10.a scrub starts
Nov 23 20:44:26 compute-1 ceph-mon[80135]: 10.a scrub ok
Nov 23 20:44:26 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e93 e93: 3 total, 3 up, 3 in
Nov 23 20:44:26 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 93 pg[10.b( v 50'991 (0'0,50'991] local-lis/les=92/93 n=6 ec=57/44 lis/c=90/63 les/c/f=91/64/0 sis=92) [0] r=0 lpr=92 pi=[63,92)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:44:26 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 93 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=92/93 n=5 ec=57/44 lis/c=90/64 les/c/f=91/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:44:26 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:26 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:26 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Nov 23 20:44:26 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Nov 23 20:44:26 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:44:26 compute-1 sudo[89730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:44:26 compute-1 sudo[89730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:26 compute-1 sudo[89730]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:26 compute-1 sudo[89755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:44:26 compute-1 sudo[89755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:26 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/204426 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 20:44:27 compute-1 podman[89797]: 2025-11-23 20:44:27.268561845 +0000 UTC m=+0.037536109 container create 6e60179de44c5833bfa82dd587ae259ce5325384a435a670b16d6fce8fae7371 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_mcclintock, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 20:44:27 compute-1 systemd[82658]: Starting Mark boot as successful...
Nov 23 20:44:27 compute-1 systemd[82658]: Finished Mark boot as successful.
Nov 23 20:44:27 compute-1 systemd[1]: Started libpod-conmon-6e60179de44c5833bfa82dd587ae259ce5325384a435a670b16d6fce8fae7371.scope.
Nov 23 20:44:27 compute-1 systemd[1]: Started libcrun container.
Nov 23 20:44:27 compute-1 podman[89797]: 2025-11-23 20:44:27.338424152 +0000 UTC m=+0.107398446 container init 6e60179de44c5833bfa82dd587ae259ce5325384a435a670b16d6fce8fae7371 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_mcclintock, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 20:44:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:27 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:27 compute-1 podman[89797]: 2025-11-23 20:44:27.345995168 +0000 UTC m=+0.114969432 container start 6e60179de44c5833bfa82dd587ae259ce5325384a435a670b16d6fce8fae7371 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 23 20:44:27 compute-1 podman[89797]: 2025-11-23 20:44:27.250913478 +0000 UTC m=+0.019887762 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 20:44:27 compute-1 podman[89797]: 2025-11-23 20:44:27.350859909 +0000 UTC m=+0.119834173 container attach 6e60179de44c5833bfa82dd587ae259ce5325384a435a670b16d6fce8fae7371 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_mcclintock, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 20:44:27 compute-1 nifty_mcclintock[89815]: 167 167
Nov 23 20:44:27 compute-1 systemd[1]: libpod-6e60179de44c5833bfa82dd587ae259ce5325384a435a670b16d6fce8fae7371.scope: Deactivated successfully.
Nov 23 20:44:27 compute-1 podman[89797]: 2025-11-23 20:44:27.351999686 +0000 UTC m=+0.120973940 container died 6e60179de44c5833bfa82dd587ae259ce5325384a435a670b16d6fce8fae7371 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 20:44:27 compute-1 systemd[1]: var-lib-containers-storage-overlay-275b54b567039eb4177f36567a9dfde2e1eaa5c6e4149fa94ea2672acf77d657-merged.mount: Deactivated successfully.
Nov 23 20:44:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:27.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:27 compute-1 podman[89797]: 2025-11-23 20:44:27.391331439 +0000 UTC m=+0.160305713 container remove 6e60179de44c5833bfa82dd587ae259ce5325384a435a670b16d6fce8fae7371 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_mcclintock, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 23 20:44:27 compute-1 systemd[1]: libpod-conmon-6e60179de44c5833bfa82dd587ae259ce5325384a435a670b16d6fce8fae7371.scope: Deactivated successfully.
Nov 23 20:44:27 compute-1 sudo[89755]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:27 compute-1 ceph-mon[80135]: osdmap e93: 3 total, 3 up, 3 in
Nov 23 20:44:27 compute-1 ceph-mon[80135]: 10.1a scrub starts
Nov 23 20:44:27 compute-1 ceph-mon[80135]: 10.1a scrub ok
Nov 23 20:44:27 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:27 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:27 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 23 20:44:27 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:44:27 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:27 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:27 compute-1 sudo[89845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:44:27 compute-1 sudo[89845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:27 compute-1 sudo[89845]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:27 compute-1 sudo[89874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:44:27 compute-1 sudo[89874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:27.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:27 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500046d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:27 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.b scrub starts
Nov 23 20:44:27 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.b scrub ok
Nov 23 20:44:27 compute-1 podman[89933]: 2025-11-23 20:44:27.894667859 +0000 UTC m=+0.044864120 container create 8075d0eb802171a3dc0839ee6a5b6c0dec3fed263a88e1fd797d8fb92119ffb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_feynman, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Nov 23 20:44:27 compute-1 systemd[1]: Started libpod-conmon-8075d0eb802171a3dc0839ee6a5b6c0dec3fed263a88e1fd797d8fb92119ffb9.scope.
Nov 23 20:44:27 compute-1 systemd[1]: Started libcrun container.
Nov 23 20:44:27 compute-1 podman[89933]: 2025-11-23 20:44:27.878606082 +0000 UTC m=+0.028802353 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 20:44:27 compute-1 podman[89933]: 2025-11-23 20:44:27.978751357 +0000 UTC m=+0.128947638 container init 8075d0eb802171a3dc0839ee6a5b6c0dec3fed263a88e1fd797d8fb92119ffb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 23 20:44:27 compute-1 podman[89933]: 2025-11-23 20:44:27.986882138 +0000 UTC m=+0.137078399 container start 8075d0eb802171a3dc0839ee6a5b6c0dec3fed263a88e1fd797d8fb92119ffb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_feynman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 23 20:44:27 compute-1 bold_feynman[89954]: 167 167
Nov 23 20:44:27 compute-1 podman[89933]: 2025-11-23 20:44:27.990169389 +0000 UTC m=+0.140365690 container attach 8075d0eb802171a3dc0839ee6a5b6c0dec3fed263a88e1fd797d8fb92119ffb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_feynman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 20:44:27 compute-1 systemd[1]: libpod-8075d0eb802171a3dc0839ee6a5b6c0dec3fed263a88e1fd797d8fb92119ffb9.scope: Deactivated successfully.
Nov 23 20:44:27 compute-1 podman[89933]: 2025-11-23 20:44:27.992709242 +0000 UTC m=+0.142905533 container died 8075d0eb802171a3dc0839ee6a5b6c0dec3fed263a88e1fd797d8fb92119ffb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_feynman, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 20:44:28 compute-1 systemd[1]: var-lib-containers-storage-overlay-e20ace3896bd55337672ed393162a8bc4f8a3bcce828a2a1163677e8572a70ad-merged.mount: Deactivated successfully.
Nov 23 20:44:28 compute-1 podman[89933]: 2025-11-23 20:44:28.029726976 +0000 UTC m=+0.179923227 container remove 8075d0eb802171a3dc0839ee6a5b6c0dec3fed263a88e1fd797d8fb92119ffb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_feynman, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 20:44:28 compute-1 systemd[1]: libpod-conmon-8075d0eb802171a3dc0839ee6a5b6c0dec3fed263a88e1fd797d8fb92119ffb9.scope: Deactivated successfully.
Nov 23 20:44:28 compute-1 sudo[89874]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:28 compute-1 sudo[89994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:44:28 compute-1 sudo[89994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:28 compute-1 sudo[89994]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:28 compute-1 sudo[90023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 20:44:28 compute-1 sudo[90023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:28 compute-1 ceph-mon[80135]: Reconfiguring crash.compute-1 (monmap changed)...
Nov 23 20:44:28 compute-1 ceph-mon[80135]: Reconfiguring daemon crash.compute-1 on compute-1
Nov 23 20:44:28 compute-1 ceph-mon[80135]: pgmap v33: 337 pgs: 2 remapped+peering, 2 active+remapped, 333 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:44:28 compute-1 ceph-mon[80135]: Reconfiguring osd.0 (monmap changed)...
Nov 23 20:44:28 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 23 20:44:28 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:44:28 compute-1 ceph-mon[80135]: Reconfiguring daemon osd.0 on compute-1
Nov 23 20:44:28 compute-1 ceph-mon[80135]: 10.b scrub starts
Nov 23 20:44:28 compute-1 ceph-mon[80135]: 10.b scrub ok
Nov 23 20:44:28 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:28 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:28 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 23 20:44:28 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 23 20:44:28 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:44:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:28 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:28 compute-1 podman[90081]: 2025-11-23 20:44:28.546984601 +0000 UTC m=+0.035404636 container create 00e525469ad496afa043a9d32e76681d448577b0ba8cf92ca04d84400c1c98e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_leavitt, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 23 20:44:28 compute-1 systemd[1]: Started libpod-conmon-00e525469ad496afa043a9d32e76681d448577b0ba8cf92ca04d84400c1c98e8.scope.
Nov 23 20:44:28 compute-1 systemd[1]: Started libcrun container.
Nov 23 20:44:28 compute-1 podman[90081]: 2025-11-23 20:44:28.614052359 +0000 UTC m=+0.102472394 container init 00e525469ad496afa043a9d32e76681d448577b0ba8cf92ca04d84400c1c98e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 23 20:44:28 compute-1 podman[90081]: 2025-11-23 20:44:28.619217876 +0000 UTC m=+0.107637911 container start 00e525469ad496afa043a9d32e76681d448577b0ba8cf92ca04d84400c1c98e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_leavitt, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 23 20:44:28 compute-1 podman[90081]: 2025-11-23 20:44:28.622463206 +0000 UTC m=+0.110883271 container attach 00e525469ad496afa043a9d32e76681d448577b0ba8cf92ca04d84400c1c98e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_leavitt, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 20:44:28 compute-1 quizzical_leavitt[90102]: 167 167
Nov 23 20:44:28 compute-1 systemd[1]: libpod-00e525469ad496afa043a9d32e76681d448577b0ba8cf92ca04d84400c1c98e8.scope: Deactivated successfully.
Nov 23 20:44:28 compute-1 podman[90081]: 2025-11-23 20:44:28.623993984 +0000 UTC m=+0.112414059 container died 00e525469ad496afa043a9d32e76681d448577b0ba8cf92ca04d84400c1c98e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_leavitt, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Nov 23 20:44:28 compute-1 podman[90081]: 2025-11-23 20:44:28.531824886 +0000 UTC m=+0.020244951 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 20:44:28 compute-1 systemd[1]: var-lib-containers-storage-overlay-27eb6c87b561fc09f9d4f766a41daaa687926dcc9350bd0e32180b5ee2f77e02-merged.mount: Deactivated successfully.
Nov 23 20:44:28 compute-1 podman[90081]: 2025-11-23 20:44:28.659493141 +0000 UTC m=+0.147913176 container remove 00e525469ad496afa043a9d32e76681d448577b0ba8cf92ca04d84400c1c98e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 20:44:28 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Nov 23 20:44:28 compute-1 systemd[1]: libpod-conmon-00e525469ad496afa043a9d32e76681d448577b0ba8cf92ca04d84400c1c98e8.scope: Deactivated successfully.
Nov 23 20:44:28 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Nov 23 20:44:28 compute-1 sudo[90023]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:29 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:44:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:29.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:44:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:44:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:29.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:44:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:29 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:29 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Nov 23 20:44:29 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Nov 23 20:44:29 compute-1 ceph-mon[80135]: Reconfiguring mon.compute-1 (monmap changed)...
Nov 23 20:44:29 compute-1 ceph-mon[80135]: Reconfiguring daemon mon.compute-1 on compute-1
Nov 23 20:44:29 compute-1 ceph-mon[80135]: 10.8 scrub starts
Nov 23 20:44:29 compute-1 ceph-mon[80135]: 10.8 scrub ok
Nov 23 20:44:29 compute-1 ceph-mon[80135]: 10.1b scrub starts
Nov 23 20:44:29 compute-1 ceph-mon[80135]: 10.1b scrub ok
Nov 23 20:44:29 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:29 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:29 compute-1 ceph-mon[80135]: Reconfiguring mon.compute-2 (monmap changed)...
Nov 23 20:44:29 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 23 20:44:29 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 23 20:44:29 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:44:29 compute-1 ceph-mon[80135]: Reconfiguring daemon mon.compute-2 on compute-2
Nov 23 20:44:29 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:29 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:29 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.jtkauz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 23 20:44:29 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 23 20:44:29 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:44:29 compute-1 python3.9[90271]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:44:30 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:30 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 20:44:30 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:30 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500046d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:30 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 8.10 deep-scrub starts
Nov 23 20:44:30 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 8.10 deep-scrub ok
Nov 23 20:44:30 compute-1 ceph-mon[80135]: pgmap v34: 337 pgs: 2 remapped+peering, 2 active+remapped, 333 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:44:30 compute-1 ceph-mon[80135]: Reconfiguring mgr.compute-2.jtkauz (monmap changed)...
Nov 23 20:44:30 compute-1 ceph-mon[80135]: Reconfiguring daemon mgr.compute-2.jtkauz on compute-2
Nov 23 20:44:30 compute-1 ceph-mon[80135]: 6.e scrub starts
Nov 23 20:44:30 compute-1 ceph-mon[80135]: 6.e scrub ok
Nov 23 20:44:30 compute-1 ceph-mon[80135]: 9.6 scrub starts
Nov 23 20:44:30 compute-1 ceph-mon[80135]: 9.6 scrub ok
Nov 23 20:44:30 compute-1 ceph-mon[80135]: 10.4 scrub starts
Nov 23 20:44:30 compute-1 ceph-mon[80135]: 10.4 scrub ok
Nov 23 20:44:30 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:30 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:30 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Nov 23 20:44:30 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Nov 23 20:44:30 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Nov 23 20:44:30 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:30 compute-1 python3.9[90421]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:44:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:31 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924002b10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:31.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:31.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:31 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:31 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Nov 23 20:44:31 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Nov 23 20:44:31 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e94 e94: 3 total, 3 up, 3 in
Nov 23 20:44:31 compute-1 ceph-mon[80135]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Nov 23 20:44:31 compute-1 ceph-mon[80135]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Nov 23 20:44:31 compute-1 ceph-mon[80135]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Nov 23 20:44:31 compute-1 ceph-mon[80135]: 10.5 scrub starts
Nov 23 20:44:31 compute-1 ceph-mon[80135]: 10.5 scrub ok
Nov 23 20:44:31 compute-1 ceph-mon[80135]: 8.10 deep-scrub starts
Nov 23 20:44:31 compute-1 ceph-mon[80135]: 8.10 deep-scrub ok
Nov 23 20:44:31 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 23 20:44:31 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 23 20:44:31 compute-1 ceph-mon[80135]: 10.13 scrub starts
Nov 23 20:44:31 compute-1 ceph-mon[80135]: 10.13 scrub ok
Nov 23 20:44:31 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 94 pg[10.1c( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=72/72 les/c/f=73/73/0 sis=94) [0] r=0 lpr=94 pi=[72,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:31 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 94 pg[10.c( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=72/72 les/c/f=73/73/0 sis=94) [0] r=0 lpr=94 pi=[72,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:31 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:44:32 compute-1 python3.9[90576]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:44:32 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:32 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003c30 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:32 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Nov 23 20:44:32 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Nov 23 20:44:32 compute-1 ceph-mon[80135]: pgmap v35: 337 pgs: 337 active+clean; 457 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:44:32 compute-1 ceph-mon[80135]: 10.18 scrub starts
Nov 23 20:44:32 compute-1 ceph-mon[80135]: 9.11 scrub starts
Nov 23 20:44:32 compute-1 ceph-mon[80135]: 10.18 scrub ok
Nov 23 20:44:32 compute-1 ceph-mon[80135]: 9.11 scrub ok
Nov 23 20:44:32 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 23 20:44:32 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 23 20:44:32 compute-1 ceph-mon[80135]: osdmap e94: 3 total, 3 up, 3 in
Nov 23 20:44:32 compute-1 ceph-mon[80135]: 10.1 deep-scrub starts
Nov 23 20:44:32 compute-1 ceph-mon[80135]: 10.1 deep-scrub ok
Nov 23 20:44:32 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e95 e95: 3 total, 3 up, 3 in
Nov 23 20:44:32 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 95 pg[10.c( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=72/72 les/c/f=73/73/0 sis=95) [0]/[2] r=-1 lpr=95 pi=[72,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:32 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 95 pg[10.c( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=72/72 les/c/f=73/73/0 sis=95) [0]/[2] r=-1 lpr=95 pi=[72,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 20:44:32 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 95 pg[10.1c( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=72/72 les/c/f=73/73/0 sis=95) [0]/[2] r=-1 lpr=95 pi=[72,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:32 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 95 pg[10.1c( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=72/72 les/c/f=73/73/0 sis=95) [0]/[2] r=-1 lpr=95 pi=[72,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 20:44:33 compute-1 sudo[90732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvwclejixmfnivlffnrnhzzuapadmowl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930672.8780885-334-176999484275298/AnsiballZ_setup.py'
Nov 23 20:44:33 compute-1 sudo[90732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:44:33 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:33 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500046d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:33 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:33 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 20:44:33 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:33 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:44:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:33.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:33 compute-1 python3.9[90734]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 20:44:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:44:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:33.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:44:33 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:33 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924002b10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:33 compute-1 sudo[90732]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:33 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Nov 23 20:44:33 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Nov 23 20:44:33 compute-1 ceph-mon[80135]: 10.2 deep-scrub starts
Nov 23 20:44:33 compute-1 ceph-mon[80135]: 10.17 scrub starts
Nov 23 20:44:33 compute-1 ceph-mon[80135]: 10.17 scrub ok
Nov 23 20:44:33 compute-1 ceph-mon[80135]: 10.2 deep-scrub ok
Nov 23 20:44:33 compute-1 ceph-mon[80135]: osdmap e95: 3 total, 3 up, 3 in
Nov 23 20:44:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 23 20:44:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 23 20:44:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:44:33 compute-1 ceph-mon[80135]: 10.3 scrub starts
Nov 23 20:44:33 compute-1 ceph-mon[80135]: 10.3 scrub ok
Nov 23 20:44:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:44:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 20:44:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 23 20:44:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 23 20:44:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 20:44:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 20:44:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:44:33 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e96 e96: 3 total, 3 up, 3 in
Nov 23 20:44:34 compute-1 sudo[90817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veqdoscjcpbjilqpmavwvkihxvrfyszh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930672.8780885-334-176999484275298/AnsiballZ_dnf.py'
Nov 23 20:44:34 compute-1 sudo[90817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:44:34 compute-1 python3.9[90819]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 20:44:34 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:34 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:34 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Nov 23 20:44:34 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Nov 23 20:44:34 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 96 pg[10.d( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=73/73 les/c/f=74/74/0 sis=96) [0] r=0 lpr=96 pi=[73,96)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:34 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 96 pg[10.1d( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=73/73 les/c/f=74/74/0 sis=96) [0] r=0 lpr=96 pi=[73,96)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:34 compute-1 ceph-mon[80135]: pgmap v38: 337 pgs: 337 active+clean; 457 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:44:34 compute-1 ceph-mon[80135]: pgmap v39: 337 pgs: 337 active+clean; 457 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:44:34 compute-1 ceph-mon[80135]: 10.15 scrub starts
Nov 23 20:44:34 compute-1 ceph-mon[80135]: 10.15 scrub ok
Nov 23 20:44:34 compute-1 ceph-mon[80135]: 10.16 scrub starts
Nov 23 20:44:34 compute-1 ceph-mon[80135]: 10.16 scrub ok
Nov 23 20:44:34 compute-1 ceph-mon[80135]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Nov 23 20:44:34 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 23 20:44:34 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 23 20:44:34 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 23 20:44:34 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 23 20:44:34 compute-1 ceph-mon[80135]: osdmap e96: 3 total, 3 up, 3 in
Nov 23 20:44:34 compute-1 ceph-mon[80135]: 10.11 scrub starts
Nov 23 20:44:34 compute-1 ceph-mon[80135]: 10.11 scrub ok
Nov 23 20:44:34 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e97 e97: 3 total, 3 up, 3 in
Nov 23 20:44:34 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 97 pg[10.d( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=73/73 les/c/f=74/74/0 sis=97) [0]/[1] r=-1 lpr=97 pi=[73,97)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:34 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 97 pg[10.d( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=73/73 les/c/f=74/74/0 sis=97) [0]/[1] r=-1 lpr=97 pi=[73,97)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 20:44:34 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 97 pg[10.1d( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=73/73 les/c/f=74/74/0 sis=97) [0]/[1] r=-1 lpr=97 pi=[73,97)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:34 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 97 pg[10.1d( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=73/73 les/c/f=74/74/0 sis=97) [0]/[1] r=-1 lpr=97 pi=[73,97)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 20:44:34 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 97 pg[10.c( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=95/72 les/c/f=96/73/0 sis=97) [0] r=0 lpr=97 pi=[72,97)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:34 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 97 pg[10.c( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=95/72 les/c/f=96/73/0 sis=97) [0] r=0 lpr=97 pi=[72,97)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:34 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 97 pg[10.1c( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=95/72 les/c/f=96/73/0 sis=97) [0] r=0 lpr=97 pi=[72,97)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:34 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 97 pg[10.1c( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=95/72 les/c/f=96/73/0 sis=97) [0] r=0 lpr=97 pi=[72,97)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:35 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003c50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:35.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:44:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:35.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:44:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:35 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500046d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:35 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.e scrub starts
Nov 23 20:44:35 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.e scrub ok
Nov 23 20:44:35 compute-1 ceph-mon[80135]: 10.1d scrub starts
Nov 23 20:44:35 compute-1 ceph-mon[80135]: 10.1d scrub ok
Nov 23 20:44:35 compute-1 ceph-mon[80135]: 10.0 scrub starts
Nov 23 20:44:35 compute-1 ceph-mon[80135]: 10.0 scrub ok
Nov 23 20:44:35 compute-1 ceph-mon[80135]: osdmap e97: 3 total, 3 up, 3 in
Nov 23 20:44:35 compute-1 ceph-mon[80135]: 6.1 scrub starts
Nov 23 20:44:35 compute-1 ceph-mon[80135]: 6.1 scrub ok
Nov 23 20:44:35 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 23 20:44:35 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 23 20:44:35 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e98 e98: 3 total, 3 up, 3 in
Nov 23 20:44:35 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 98 pg[10.c( v 50'991 (0'0,50'991] local-lis/les=97/98 n=6 ec=57/44 lis/c=95/72 les/c/f=96/73/0 sis=97) [0] r=0 lpr=97 pi=[72,97)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:44:35 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 98 pg[6.e( empty local-lis/les=0/0 n=0 ec=53/18 lis/c=76/76 les/c/f=77/77/0 sis=98) [0] r=0 lpr=98 pi=[76,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:35 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 98 pg[10.1c( v 50'991 (0'0,50'991] local-lis/les=97/98 n=5 ec=57/44 lis/c=95/72 les/c/f=96/73/0 sis=97) [0] r=0 lpr=97 pi=[72,97)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:44:36 compute-1 sudo[90878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:44:36 compute-1 sudo[90878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:36 compute-1 sudo[90878]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:36 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:36 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 20:44:36 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:36 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:36 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.f scrub starts
Nov 23 20:44:36 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.f scrub ok
Nov 23 20:44:36 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:44:36 compute-1 ceph-mon[80135]: pgmap v42: 337 pgs: 2 active+remapped, 335 active+clean; 458 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:44:36 compute-1 ceph-mon[80135]: 10.e scrub starts
Nov 23 20:44:36 compute-1 ceph-mon[80135]: 10.e scrub ok
Nov 23 20:44:36 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 23 20:44:36 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 23 20:44:36 compute-1 ceph-mon[80135]: osdmap e98: 3 total, 3 up, 3 in
Nov 23 20:44:36 compute-1 ceph-mon[80135]: 10.14 scrub starts
Nov 23 20:44:36 compute-1 ceph-mon[80135]: 10.14 scrub ok
Nov 23 20:44:36 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e99 e99: 3 total, 3 up, 3 in
Nov 23 20:44:36 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 99 pg[10.d( v 50'991 (0'0,50'991] local-lis/les=0/0 n=8 ec=57/44 lis/c=97/73 les/c/f=98/74/0 sis=99) [0] r=0 lpr=99 pi=[73,99)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:36 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 99 pg[10.d( v 50'991 (0'0,50'991] local-lis/les=0/0 n=8 ec=57/44 lis/c=97/73 les/c/f=98/74/0 sis=99) [0] r=0 lpr=99 pi=[73,99)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:36 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 99 pg[10.1d( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=97/73 les/c/f=98/74/0 sis=99) [0] r=0 lpr=99 pi=[73,99)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:36 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 99 pg[10.1d( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=97/73 les/c/f=98/74/0 sis=99) [0] r=0 lpr=99 pi=[73,99)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:36 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 99 pg[6.e( v 49'39 lc 48'19 (0'0,49'39] local-lis/les=98/99 n=1 ec=53/18 lis/c=76/76 les/c/f=77/77/0 sis=98) [0] r=0 lpr=98 pi=[76,98)/1 crt=49'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:44:37 compute-1 sshd-session[90849]: Connection closed by authenticating user root 80.94.95.116 port 39806 [preauth]
Nov 23 20:44:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:37 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:44:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:37.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:44:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:37.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:37 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003c50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:37 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Nov 23 20:44:37 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Nov 23 20:44:37 compute-1 ceph-mon[80135]: 10.f scrub starts
Nov 23 20:44:37 compute-1 ceph-mon[80135]: 10.f scrub ok
Nov 23 20:44:37 compute-1 ceph-mon[80135]: osdmap e99: 3 total, 3 up, 3 in
Nov 23 20:44:37 compute-1 ceph-mon[80135]: pgmap v45: 337 pgs: 2 active+remapped, 335 active+clean; 458 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:44:37 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 23 20:44:37 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 23 20:44:37 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e100 e100: 3 total, 3 up, 3 in
Nov 23 20:44:37 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 100 pg[10.f( v 50'991 (0'0,50'991] local-lis/les=81/82 n=7 ec=57/44 lis/c=81/81 les/c/f=82/82/0 sis=100 pruub=12.921210289s) [2] r=-1 lpr=100 pi=[81,100)/1 crt=50'991 mlcod 0'0 active pruub 255.856842041s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:37 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 100 pg[10.f( v 50'991 (0'0,50'991] local-lis/les=81/82 n=7 ec=57/44 lis/c=81/81 les/c/f=82/82/0 sis=100 pruub=12.921073914s) [2] r=-1 lpr=100 pi=[81,100)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 255.856842041s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:44:37 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 100 pg[6.f( v 49'39 (0'0,49'39] local-lis/les=67/68 n=3 ec=53/18 lis/c=67/67 les/c/f=68/68/0 sis=100 pruub=14.325005531s) [1] r=-1 lpr=100 pi=[67,100)/1 crt=49'39 mlcod 49'39 active pruub 257.261383057s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:37 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 100 pg[6.f( v 49'39 (0'0,49'39] local-lis/les=67/68 n=3 ec=53/18 lis/c=67/67 les/c/f=68/68/0 sis=100 pruub=14.324955940s) [1] r=-1 lpr=100 pi=[67,100)/1 crt=49'39 mlcod 0'0 unknown NOTIFY pruub 257.261383057s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:44:37 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 100 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=81/82 n=5 ec=57/44 lis/c=81/81 les/c/f=82/82/0 sis=100 pruub=12.913941383s) [2] r=-1 lpr=100 pi=[81,100)/1 crt=50'991 mlcod 0'0 active pruub 255.850540161s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:37 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 100 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=81/82 n=5 ec=57/44 lis/c=81/81 les/c/f=82/82/0 sis=100 pruub=12.913920403s) [2] r=-1 lpr=100 pi=[81,100)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 255.850540161s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:44:37 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 100 pg[10.d( v 50'991 (0'0,50'991] local-lis/les=99/100 n=8 ec=57/44 lis/c=97/73 les/c/f=98/74/0 sis=99) [0] r=0 lpr=99 pi=[73,99)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:44:37 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 100 pg[10.1d( v 50'991 (0'0,50'991] local-lis/les=99/100 n=5 ec=57/44 lis/c=97/73 les/c/f=98/74/0 sis=99) [0] r=0 lpr=99 pi=[73,99)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:44:38 compute-1 sudo[90912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 20:44:38 compute-1 sudo[90912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:38 compute-1 sudo[90912]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:38 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:38 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500046d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:38 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.c scrub starts
Nov 23 20:44:38 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.c scrub ok
Nov 23 20:44:38 compute-1 ceph-mon[80135]: 10.6 scrub starts
Nov 23 20:44:38 compute-1 ceph-mon[80135]: 10.6 scrub ok
Nov 23 20:44:38 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 23 20:44:38 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 23 20:44:38 compute-1 ceph-mon[80135]: osdmap e100: 3 total, 3 up, 3 in
Nov 23 20:44:38 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:38 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:38 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e101 e101: 3 total, 3 up, 3 in
Nov 23 20:44:38 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 101 pg[10.f( v 50'991 (0'0,50'991] local-lis/les=81/82 n=7 ec=57/44 lis/c=81/81 les/c/f=82/82/0 sis=101) [2]/[0] r=0 lpr=101 pi=[81,101)/1 crt=50'991 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:38 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 101 pg[10.f( v 50'991 (0'0,50'991] local-lis/les=81/82 n=7 ec=57/44 lis/c=81/81 les/c/f=82/82/0 sis=101) [2]/[0] r=0 lpr=101 pi=[81,101)/1 crt=50'991 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:38 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 101 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=81/82 n=5 ec=57/44 lis/c=81/81 les/c/f=82/82/0 sis=101) [2]/[0] r=0 lpr=101 pi=[81,101)/1 crt=50'991 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:38 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 101 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=81/82 n=5 ec=57/44 lis/c=81/81 les/c/f=82/82/0 sis=101) [2]/[0] r=0 lpr=101 pi=[81,101)/1 crt=50'991 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:39 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:39 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:39.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:39 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:39 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 20:44:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:39.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:39 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:39 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:39 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Nov 23 20:44:39 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Nov 23 20:44:39 compute-1 ceph-mon[80135]: 10.c scrub starts
Nov 23 20:44:39 compute-1 ceph-mon[80135]: 10.c scrub ok
Nov 23 20:44:39 compute-1 ceph-mon[80135]: osdmap e101: 3 total, 3 up, 3 in
Nov 23 20:44:39 compute-1 ceph-mon[80135]: pgmap v48: 337 pgs: 2 remapped+peering, 1 active+recovering, 334 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1/225 objects misplaced (0.444%)
Nov 23 20:44:39 compute-1 sshd-session[90942]: Received disconnect from 102.176.81.29 port 47838:11: Bye Bye [preauth]
Nov 23 20:44:39 compute-1 sshd-session[90942]: Disconnected from authenticating user root 102.176.81.29 port 47838 [preauth]
Nov 23 20:44:39 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e102 e102: 3 total, 3 up, 3 in
Nov 23 20:44:40 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 102 pg[10.f( v 50'991 (0'0,50'991] local-lis/les=101/102 n=7 ec=57/44 lis/c=81/81 les/c/f=82/82/0 sis=101) [2]/[0] async=[2] r=0 lpr=101 pi=[81,101)/1 crt=50'991 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:44:40 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 102 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=101/102 n=5 ec=57/44 lis/c=81/81 les/c/f=82/82/0 sis=101) [2]/[0] async=[2] r=0 lpr=101 pi=[81,101)/1 crt=50'991 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:44:40 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e103 e103: 3 total, 3 up, 3 in
Nov 23 20:44:40 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 103 pg[10.f( v 50'991 (0'0,50'991] local-lis/les=101/102 n=7 ec=57/44 lis/c=101/81 les/c/f=102/82/0 sis=103 pruub=15.661826134s) [2] async=[2] r=-1 lpr=103 pi=[81,103)/1 crt=50'991 mlcod 50'991 active pruub 261.000366211s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:40 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 103 pg[10.f( v 50'991 (0'0,50'991] local-lis/les=101/102 n=7 ec=57/44 lis/c=101/81 les/c/f=102/82/0 sis=103 pruub=15.661749840s) [2] r=-1 lpr=103 pi=[81,103)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 261.000366211s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:44:40 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 103 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=101/102 n=5 ec=57/44 lis/c=101/81 les/c/f=102/82/0 sis=103 pruub=15.660367966s) [2] async=[2] r=-1 lpr=103 pi=[81,103)/1 crt=50'991 mlcod 50'991 active pruub 261.000366211s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:40 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 103 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=101/102 n=5 ec=57/44 lis/c=101/81 les/c/f=102/82/0 sis=103 pruub=15.660287857s) [2] r=-1 lpr=103 pi=[81,103)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 261.000366211s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:44:40 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:40 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003c70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:40 compute-1 sshd-session[90945]: Invalid user aaa from 34.91.0.68 port 46104
Nov 23 20:44:40 compute-1 sshd-session[90945]: Received disconnect from 34.91.0.68 port 46104:11: Bye Bye [preauth]
Nov 23 20:44:40 compute-1 sshd-session[90945]: Disconnected from invalid user aaa 34.91.0.68 port 46104 [preauth]
Nov 23 20:44:40 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Nov 23 20:44:40 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Nov 23 20:44:40 compute-1 ceph-mon[80135]: 10.19 scrub starts
Nov 23 20:44:40 compute-1 ceph-mon[80135]: 10.19 scrub ok
Nov 23 20:44:40 compute-1 ceph-mon[80135]: osdmap e102: 3 total, 3 up, 3 in
Nov 23 20:44:40 compute-1 ceph-mon[80135]: osdmap e103: 3 total, 3 up, 3 in
Nov 23 20:44:41 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e104 e104: 3 total, 3 up, 3 in
Nov 23 20:44:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:41 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500046d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:41.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:41.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:41 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:41 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Nov 23 20:44:41 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Nov 23 20:44:41 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:44:42 compute-1 ceph-mon[80135]: 10.1c scrub starts
Nov 23 20:44:42 compute-1 ceph-mon[80135]: 10.1c scrub ok
Nov 23 20:44:42 compute-1 ceph-mon[80135]: osdmap e104: 3 total, 3 up, 3 in
Nov 23 20:44:42 compute-1 ceph-mon[80135]: 10.1f deep-scrub starts
Nov 23 20:44:42 compute-1 ceph-mon[80135]: 10.1f deep-scrub ok
Nov 23 20:44:42 compute-1 ceph-mon[80135]: pgmap v52: 337 pgs: 2 remapped+peering, 1 active+recovering, 334 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1/225 objects misplaced (0.444%)
Nov 23 20:44:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:42 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 20:44:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:42 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:44:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:42 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:42 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Nov 23 20:44:42 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Nov 23 20:44:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/204442 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 20:44:42 compute-1 sshd-session[90948]: Invalid user cat from 118.145.189.160 port 53046
Nov 23 20:44:42 compute-1 sshd-session[90948]: Received disconnect from 118.145.189.160 port 53046:11: Bye Bye [preauth]
Nov 23 20:44:42 compute-1 sshd-session[90948]: Disconnected from invalid user cat 118.145.189.160 port 53046 [preauth]
Nov 23 20:44:43 compute-1 ceph-mon[80135]: 10.1e scrub starts
Nov 23 20:44:43 compute-1 ceph-mon[80135]: 10.1e scrub ok
Nov 23 20:44:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:43 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003c90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:43.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:43 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Nov 23 20:44:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:44:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:43.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:44:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:43 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500046d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:43 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Nov 23 20:44:44 compute-1 ceph-mon[80135]: 10.10 scrub starts
Nov 23 20:44:44 compute-1 ceph-mon[80135]: 10.10 scrub ok
Nov 23 20:44:44 compute-1 ceph-mon[80135]: pgmap v53: 337 pgs: 2 remapped+peering, 1 active+recovering, 334 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1/225 objects misplaced (0.444%)
Nov 23 20:44:44 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:44 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:44 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Nov 23 20:44:44 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Nov 23 20:44:45 compute-1 ceph-mon[80135]: 10.9 scrub starts
Nov 23 20:44:45 compute-1 ceph-mon[80135]: 10.9 scrub ok
Nov 23 20:44:45 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:45 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:45.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:45 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:45 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 20:44:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:44:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:45.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:44:45 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 6.d scrub starts
Nov 23 20:44:45 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:45 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003cb0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:45 compute-1 ceph-osd[77613]: log_channel(cluster) log [DBG] : 6.d scrub ok
Nov 23 20:44:46 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e105 e105: 3 total, 3 up, 3 in
Nov 23 20:44:46 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 105 pg[10.10( v 50'991 (0'0,50'991] local-lis/les=57/58 n=2 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=105 pruub=11.388984680s) [2] r=-1 lpr=105 pi=[57,105)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 262.433959961s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:46 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 105 pg[10.10( v 50'991 (0'0,50'991] local-lis/les=57/58 n=2 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=105 pruub=11.388861656s) [2] r=-1 lpr=105 pi=[57,105)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 262.433959961s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:44:46 compute-1 ceph-mon[80135]: 10.7 scrub starts
Nov 23 20:44:46 compute-1 ceph-mon[80135]: 10.7 scrub ok
Nov 23 20:44:46 compute-1 ceph-mon[80135]: pgmap v54: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:44:46 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 23 20:44:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:46 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500046d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:46 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:44:47 compute-1 ceph-mon[80135]: 6.d scrub starts
Nov 23 20:44:47 compute-1 ceph-mon[80135]: 6.d scrub ok
Nov 23 20:44:47 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 23 20:44:47 compute-1 ceph-mon[80135]: osdmap e105: 3 total, 3 up, 3 in
Nov 23 20:44:47 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e106 e106: 3 total, 3 up, 3 in
Nov 23 20:44:47 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 106 pg[10.10( v 50'991 (0'0,50'991] local-lis/les=57/58 n=2 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=106) [2]/[0] r=0 lpr=106 pi=[57,106)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:47 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 106 pg[10.10( v 50'991 (0'0,50'991] local-lis/les=57/58 n=2 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=106) [2]/[0] r=0 lpr=106 pi=[57,106)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 20:44:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:47 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:47.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:47.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:47 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:48 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e107 e107: 3 total, 3 up, 3 in
Nov 23 20:44:48 compute-1 ceph-mon[80135]: osdmap e106: 3 total, 3 up, 3 in
Nov 23 20:44:48 compute-1 ceph-mon[80135]: pgmap v57: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:44:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 23 20:44:48 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 107 pg[10.10( v 50'991 (0'0,50'991] local-lis/les=106/107 n=2 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=106) [2]/[0] async=[2] r=0 lpr=106 pi=[57,106)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:44:48 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:48 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003cd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/204449 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 20:44:49 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 23 20:44:49 compute-1 ceph-mon[80135]: osdmap e107: 3 total, 3 up, 3 in
Nov 23 20:44:49 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:44:49 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:44:49 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e108 e108: 3 total, 3 up, 3 in
Nov 23 20:44:49 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 108 pg[10.10( v 50'991 (0'0,50'991] local-lis/les=106/107 n=2 ec=57/44 lis/c=106/57 les/c/f=107/58/0 sis=108 pruub=14.884209633s) [2] async=[2] r=-1 lpr=108 pi=[57,108)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 269.114807129s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:44:49 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 108 pg[10.10( v 50'991 (0'0,50'991] local-lis/les=106/107 n=2 ec=57/44 lis/c=106/57 les/c/f=107/58/0 sis=108 pruub=14.884119987s) [2] r=-1 lpr=108 pi=[57,108)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 269.114807129s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:44:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954002d00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:49.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:49.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:50 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e109 e109: 3 total, 3 up, 3 in
Nov 23 20:44:50 compute-1 ceph-mon[80135]: osdmap e108: 3 total, 3 up, 3 in
Nov 23 20:44:50 compute-1 ceph-mon[80135]: pgmap v60: 337 pgs: 1 active+remapped, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:44:50 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 23 20:44:50 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e110 e110: 3 total, 3 up, 3 in
Nov 23 20:44:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:50 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:51 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:51 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003cd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:51.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:51 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 23 20:44:51 compute-1 ceph-mon[80135]: osdmap e109: 3 total, 3 up, 3 in
Nov 23 20:44:51 compute-1 ceph-mon[80135]: osdmap e110: 3 total, 3 up, 3 in
Nov 23 20:44:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:51.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:51 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:51 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954002d00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:51 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e111 e111: 3 total, 3 up, 3 in
Nov 23 20:44:51 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:44:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:52 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:52 compute-1 ceph-mon[80135]: pgmap v63: 337 pgs: 1 active+remapped, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:44:52 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 23 20:44:52 compute-1 ceph-mon[80135]: osdmap e111: 3 total, 3 up, 3 in
Nov 23 20:44:52 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e112 e112: 3 total, 3 up, 3 in
Nov 23 20:44:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:53 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:44:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:53.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:44:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:53.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:53 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003cf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:53 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e113 e113: 3 total, 3 up, 3 in
Nov 23 20:44:53 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 23 20:44:53 compute-1 ceph-mon[80135]: osdmap e112: 3 total, 3 up, 3 in
Nov 23 20:44:53 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 23 20:44:54 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:54 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954002d00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:54 compute-1 ceph-mon[80135]: pgmap v66: 337 pgs: 1 active+remapped, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:44:54 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 23 20:44:54 compute-1 ceph-mon[80135]: osdmap e113: 3 total, 3 up, 3 in
Nov 23 20:44:54 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e114 e114: 3 total, 3 up, 3 in
Nov 23 20:44:55 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e115 e115: 3 total, 3 up, 3 in
Nov 23 20:44:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:55 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:55.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:44:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:55.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:44:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:55 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:55 compute-1 ceph-mon[80135]: osdmap e114: 3 total, 3 up, 3 in
Nov 23 20:44:55 compute-1 ceph-mon[80135]: osdmap e115: 3 total, 3 up, 3 in
Nov 23 20:44:56 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e116 e116: 3 total, 3 up, 3 in
Nov 23 20:44:56 compute-1 sudo[91002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:44:56 compute-1 sudo[91002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:44:56 compute-1 sudo[91002]: pam_unix(sudo:session): session closed for user root
Nov 23 20:44:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:56 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:56 compute-1 ceph-mon[80135]: pgmap v70: 337 pgs: 1 unknown, 1 remapped+peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 28 B/s, 1 objects/s recovering
Nov 23 20:44:56 compute-1 ceph-mon[80135]: osdmap e116: 3 total, 3 up, 3 in
Nov 23 20:44:56 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:44:57 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e117 e117: 3 total, 3 up, 3 in
Nov 23 20:44:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:57 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954002d00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:57.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:44:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:44:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:57.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:44:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:57 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:58 compute-1 ceph-mon[80135]: osdmap e117: 3 total, 3 up, 3 in
Nov 23 20:44:58 compute-1 ceph-mon[80135]: pgmap v73: 337 pgs: 1 unknown, 1 remapped+peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 28 B/s, 1 objects/s recovering
Nov 23 20:44:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:58 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:59 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d30 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:44:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:59.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:44:59 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e118 e118: 3 total, 3 up, 3 in
Nov 23 20:44:59 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 23 20:44:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:59 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954002d00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:44:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:44:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:44:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:59.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:00 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:00 compute-1 ceph-mon[80135]: pgmap v74: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 638 B/s rd, 0 op/s; 22 B/s, 2 objects/s recovering
Nov 23 20:45:00 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 23 20:45:00 compute-1 ceph-mon[80135]: osdmap e118: 3 total, 3 up, 3 in
Nov 23 20:45:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:01 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954002d00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:01.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:01 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 23 20:45:01 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e119 e119: 3 total, 3 up, 3 in
Nov 23 20:45:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:01 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:01.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:01 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 20:45:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:02 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:02 compute-1 ceph-mon[80135]: pgmap v76: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 18 B/s, 1 objects/s recovering
Nov 23 20:45:02 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 23 20:45:02 compute-1 ceph-mon[80135]: osdmap e119: 3 total, 3 up, 3 in
Nov 23 20:45:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:03 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:45:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:03.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:45:03 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e120 e120: 3 total, 3 up, 3 in
Nov 23 20:45:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:45:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 23 20:45:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:03 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540043f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:03.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:04 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:04 compute-1 ceph-mon[80135]: pgmap v78: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 499 B/s rd, 0 op/s; 17 B/s, 1 objects/s recovering
Nov 23 20:45:04 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 23 20:45:04 compute-1 ceph-mon[80135]: osdmap e120: 3 total, 3 up, 3 in
Nov 23 20:45:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:05 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:05.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:05 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:05 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 23 20:45:05 compute-1 sshd-session[91059]: Invalid user local from 43.225.142.116 port 36332
Nov 23 20:45:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:05.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:05 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e121 e121: 3 total, 3 up, 3 in
Nov 23 20:45:05 compute-1 sshd-session[91059]: Received disconnect from 43.225.142.116 port 36332:11: Bye Bye [preauth]
Nov 23 20:45:05 compute-1 sshd-session[91059]: Disconnected from invalid user local 43.225.142.116 port 36332 [preauth]
Nov 23 20:45:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:06 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540043f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:06 compute-1 ceph-mon[80135]: pgmap v80: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:45:06 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 23 20:45:06 compute-1 ceph-mon[80135]: osdmap e121: 3 total, 3 up, 3 in
Nov 23 20:45:06 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:45:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:07 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:45:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:07.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:45:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:07 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:45:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:07.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:45:07 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 23 20:45:07 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e122 e122: 3 total, 3 up, 3 in
Nov 23 20:45:07 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 122 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=88/89 n=7 ec=57/44 lis/c=88/88 les/c/f=89/89/0 sis=122 pruub=10.758297920s) [1] r=-1 lpr=122 pi=[88,122)/1 crt=50'991 mlcod 0'0 active pruub 283.454803467s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:45:07 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 122 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=88/89 n=7 ec=57/44 lis/c=88/88 les/c/f=89/89/0 sis=122 pruub=10.758251190s) [1] r=-1 lpr=122 pi=[88,122)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 283.454803467s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:45:08 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:08 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:08 compute-1 ceph-mon[80135]: pgmap v82: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:45:08 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 23 20:45:08 compute-1 ceph-mon[80135]: osdmap e122: 3 total, 3 up, 3 in
Nov 23 20:45:08 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e123 e123: 3 total, 3 up, 3 in
Nov 23 20:45:08 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 123 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=88/89 n=7 ec=57/44 lis/c=88/88 les/c/f=89/89/0 sis=123) [1]/[0] r=0 lpr=123 pi=[88,123)/1 crt=50'991 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:45:08 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 123 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=88/89 n=7 ec=57/44 lis/c=88/88 les/c/f=89/89/0 sis=123) [1]/[0] r=0 lpr=123 pi=[88,123)/1 crt=50'991 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 20:45:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:09 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540043f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:09.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:09 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:45:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:09.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:45:09 compute-1 ceph-mon[80135]: osdmap e123: 3 total, 3 up, 3 in
Nov 23 20:45:09 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e124 e124: 3 total, 3 up, 3 in
Nov 23 20:45:09 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 124 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=123/124 n=7 ec=57/44 lis/c=88/88 les/c/f=89/89/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[88,123)/1 crt=50'991 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:45:10 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e125 e125: 3 total, 3 up, 3 in
Nov 23 20:45:10 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 125 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=123/124 n=7 ec=57/44 lis/c=123/88 les/c/f=124/89/0 sis=125 pruub=15.395946503s) [1] async=[1] r=-1 lpr=125 pi=[88,125)/1 crt=50'991 mlcod 50'991 active pruub 290.748016357s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:45:10 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 125 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=123/124 n=7 ec=57/44 lis/c=123/88 les/c/f=124/89/0 sis=125 pruub=15.395783424s) [1] r=-1 lpr=125 pi=[88,125)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 290.748016357s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:45:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:10 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:10 compute-1 ceph-mon[80135]: pgmap v85: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 522 B/s rd, 0 op/s
Nov 23 20:45:10 compute-1 ceph-mon[80135]: osdmap e124: 3 total, 3 up, 3 in
Nov 23 20:45:10 compute-1 ceph-mon[80135]: osdmap e125: 3 total, 3 up, 3 in
Nov 23 20:45:11 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e126 e126: 3 total, 3 up, 3 in
Nov 23 20:45:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:11 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:11.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:11 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540043f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:11.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:11 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:45:12 compute-1 ceph-mon[80135]: osdmap e126: 3 total, 3 up, 3 in
Nov 23 20:45:12 compute-1 ceph-mon[80135]: pgmap v89: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 806 B/s rd, 0 op/s
Nov 23 20:45:12 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:12 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:13 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:13.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:13 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:45:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:13.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:45:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:14 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540043f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:14 compute-1 ceph-mon[80135]: pgmap v90: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 640 B/s rd, 0 op/s
Nov 23 20:45:15 compute-1 sudo[90817]: pam_unix(sudo:session): session closed for user root
Nov 23 20:45:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:15 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:15.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:15 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e127 e127: 3 total, 3 up, 3 in
Nov 23 20:45:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:15 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:15 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 23 20:45:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:15.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:16 compute-1 sudo[91216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjjvddoldwybgosvxbhkgnpmyovhaxnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930715.8213072-370-58733655347598/AnsiballZ_command.py'
Nov 23 20:45:16 compute-1 sudo[91216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:45:16 compute-1 python3.9[91218]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:45:16 compute-1 sudo[91225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:45:16 compute-1 sudo[91225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:45:16 compute-1 sudo[91225]: pam_unix(sudo:session): session closed for user root
Nov 23 20:45:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:16 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:16 compute-1 ceph-mon[80135]: pgmap v91: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Nov 23 20:45:16 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 23 20:45:16 compute-1 ceph-mon[80135]: osdmap e127: 3 total, 3 up, 3 in
Nov 23 20:45:16 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:45:17 compute-1 sudo[91216]: pam_unix(sudo:session): session closed for user root
Nov 23 20:45:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:17 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540043f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:17.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:17 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540043f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:17.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:17 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 23 20:45:17 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e128 e128: 3 total, 3 up, 3 in
Nov 23 20:45:17 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 128 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=92/93 n=2 ec=57/44 lis/c=92/92 les/c/f=93/93/0 sis=128 pruub=12.716535568s) [1] r=-1 lpr=128 pi=[92,128)/1 crt=50'991 mlcod 0'0 active pruub 295.470977783s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:45:17 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 128 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=92/93 n=2 ec=57/44 lis/c=92/92 les/c/f=93/93/0 sis=128 pruub=12.716499329s) [1] r=-1 lpr=128 pi=[92,128)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 295.470977783s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:45:18 compute-1 sudo[91530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqvnxsfpkawwbowzaijdfidungsrjfby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930717.6115859-394-151378332711823/AnsiballZ_selinux.py'
Nov 23 20:45:18 compute-1 sudo[91530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:45:18 compute-1 python3.9[91532]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 23 20:45:18 compute-1 sudo[91530]: pam_unix(sudo:session): session closed for user root
Nov 23 20:45:18 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:18 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:18 compute-1 ceph-mon[80135]: pgmap v93: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Nov 23 20:45:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 23 20:45:18 compute-1 ceph-mon[80135]: osdmap e128: 3 total, 3 up, 3 in
Nov 23 20:45:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:45:18 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e129 e129: 3 total, 3 up, 3 in
Nov 23 20:45:18 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 129 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=92/93 n=2 ec=57/44 lis/c=92/92 les/c/f=93/93/0 sis=129) [1]/[0] r=0 lpr=129 pi=[92,129)/1 crt=50'991 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:45:18 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 129 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=92/93 n=2 ec=57/44 lis/c=92/92 les/c/f=93/93/0 sis=129) [1]/[0] r=0 lpr=129 pi=[92,129)/1 crt=50'991 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 20:45:19 compute-1 sudo[91684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjckdcvicqqqasqtrzrehwvoqsrverxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930718.9754677-427-148187613251573/AnsiballZ_command.py'
Nov 23 20:45:19 compute-1 sudo[91684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:45:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:19 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:19 compute-1 python3.9[91686]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 23 20:45:19 compute-1 sudo[91684]: pam_unix(sudo:session): session closed for user root
Nov 23 20:45:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:19.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:19 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500020e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:19.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:19 compute-1 ceph-mon[80135]: osdmap e129: 3 total, 3 up, 3 in
Nov 23 20:45:19 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e130 e130: 3 total, 3 up, 3 in
Nov 23 20:45:19 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 130 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=129/130 n=2 ec=57/44 lis/c=92/92 les/c/f=93/93/0 sis=129) [1]/[0] async=[1] r=0 lpr=129 pi=[92,129)/1 crt=50'991 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:45:20 compute-1 sudo[91838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scbqagabyzfhisqeaoyydzixderdizkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930719.8062408-451-83068514523925/AnsiballZ_file.py'
Nov 23 20:45:20 compute-1 sudo[91838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:45:20 compute-1 python3.9[91840]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:45:20 compute-1 sudo[91838]: pam_unix(sudo:session): session closed for user root
Nov 23 20:45:20 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e131 e131: 3 total, 3 up, 3 in
Nov 23 20:45:20 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 131 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=129/130 n=2 ec=57/44 lis/c=129/92 les/c/f=130/93/0 sis=131 pruub=15.474392891s) [1] async=[1] r=-1 lpr=131 pi=[92,131)/1 crt=50'991 mlcod 50'991 active pruub 300.843353271s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:45:20 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 131 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=129/130 n=2 ec=57/44 lis/c=129/92 les/c/f=130/93/0 sis=131 pruub=15.474349976s) [1] r=-1 lpr=131 pi=[92,131)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 300.843353271s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:45:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:20 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6930000f30 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:20 compute-1 sshd-session[91456]: Connection closed by authenticating user nobody 58.49.113.138 port 10049 [preauth]
Nov 23 20:45:20 compute-1 ceph-mon[80135]: pgmap v96: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 18 B/s, 1 objects/s recovering
Nov 23 20:45:20 compute-1 ceph-mon[80135]: osdmap e130: 3 total, 3 up, 3 in
Nov 23 20:45:20 compute-1 ceph-mon[80135]: osdmap e131: 3 total, 3 up, 3 in
Nov 23 20:45:21 compute-1 sudo[91990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szqjdwhbaakmsnvfzxvjyzmpclfiymdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930720.6506243-475-234233218232420/AnsiballZ_mount.py'
Nov 23 20:45:21 compute-1 sudo[91990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:45:21 compute-1 python3.9[91992]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 23 20:45:21 compute-1 sudo[91990]: pam_unix(sudo:session): session closed for user root
Nov 23 20:45:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:21 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:21 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e132 e132: 3 total, 3 up, 3 in
Nov 23 20:45:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:21.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:21 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003c30 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:21.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:21 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:45:22 compute-1 ceph-mon[80135]: osdmap e132: 3 total, 3 up, 3 in
Nov 23 20:45:22 compute-1 ceph-mon[80135]: pgmap v100: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 817 B/s rd, 0 op/s
Nov 23 20:45:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:22 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500020e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:22 compute-1 sudo[92143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktmkgnuaqxxykneeydqoztvxjkrsksqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930722.3366961-559-94570918202648/AnsiballZ_file.py'
Nov 23 20:45:22 compute-1 sudo[92143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:45:22 compute-1 python3.9[92145]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:45:22 compute-1 sudo[92143]: pam_unix(sudo:session): session closed for user root
Nov 23 20:45:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:23 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6930002a80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:23 compute-1 sudo[92295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxohhyqnopqjdmqdfbnvntzjrlexxjor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930723.1503286-583-281255115513761/AnsiballZ_stat.py'
Nov 23 20:45:23 compute-1 sudo[92295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:45:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:45:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:23.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:45:23 compute-1 python3.9[92297]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:45:23 compute-1 sudo[92295]: pam_unix(sudo:session): session closed for user root
Nov 23 20:45:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:23 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6930002a80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:23.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:23 compute-1 sudo[92374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjbbyfuxirdoxnilezjxgptofqgqcwqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930723.1503286-583-281255115513761/AnsiballZ_file.py'
Nov 23 20:45:23 compute-1 sudo[92374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:45:24 compute-1 python3.9[92376]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:45:24 compute-1 sudo[92374]: pam_unix(sudo:session): session closed for user root
Nov 23 20:45:24 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:24 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003c50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:24 compute-1 ceph-mon[80135]: pgmap v101: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 651 B/s rd, 0 op/s
Nov 23 20:45:24 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/204524 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 20:45:25 compute-1 sudo[92526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xccfgkdhnwtziwyivmbzdgcdqzkrzpwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930725.0911565-646-279805221752761/AnsiballZ_stat.py'
Nov 23 20:45:25 compute-1 sudo[92526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:45:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:25 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500020e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:45:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:25.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:45:25 compute-1 python3.9[92528]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:45:25 compute-1 sudo[92526]: pam_unix(sudo:session): session closed for user root
Nov 23 20:45:25 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 23 20:45:25 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e133 e133: 3 total, 3 up, 3 in
Nov 23 20:45:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:25 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500020e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:25.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:26 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:26 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:26 compute-1 ceph-mon[80135]: pgmap v102: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 23 20:45:26 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 23 20:45:26 compute-1 ceph-mon[80135]: osdmap e133: 3 total, 3 up, 3 in
Nov 23 20:45:26 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:45:26 compute-1 sudo[92681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbqivegygahtisvtjviczshozarlzxqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930726.4138424-685-24776846851418/AnsiballZ_getent.py'
Nov 23 20:45:26 compute-1 sudo[92681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:45:27 compute-1 python3.9[92683]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 23 20:45:27 compute-1 sudo[92681]: pam_unix(sudo:session): session closed for user root
Nov 23 20:45:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:27 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003c70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:27.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:27 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6930002a80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:27.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:27 compute-1 sudo[92835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxjlnobdcugqayoazukbgionfktkwups ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930727.5837827-715-216474509823729/AnsiballZ_getent.py'
Nov 23 20:45:27 compute-1 sudo[92835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:45:28 compute-1 python3.9[92837]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 23 20:45:28 compute-1 sudo[92835]: pam_unix(sudo:session): session closed for user root
Nov 23 20:45:28 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 23 20:45:28 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e134 e134: 3 total, 3 up, 3 in
Nov 23 20:45:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:28 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500020e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:28 compute-1 sudo[92988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcbmbehwtqdjppozfvsmlaxaqasyaxam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930728.4311776-739-13566520080928/AnsiballZ_group.py'
Nov 23 20:45:28 compute-1 sudo[92988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:45:29 compute-1 ceph-mon[80135]: pgmap v104: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Nov 23 20:45:29 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 23 20:45:29 compute-1 ceph-mon[80135]: osdmap e134: 3 total, 3 up, 3 in
Nov 23 20:45:29 compute-1 python3.9[92990]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 23 20:45:29 compute-1 sudo[92988]: pam_unix(sudo:session): session closed for user root
Nov 23 20:45:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:29 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:45:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:29.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:45:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:29 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003c90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:29 compute-1 sudo[93141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgkgauvvulrtaxpswjmqaoykcxusigwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930729.4393747-766-60007607139175/AnsiballZ_file.py'
Nov 23 20:45:29 compute-1 sudo[93141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:45:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:29.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:29 compute-1 python3.9[93143]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 23 20:45:29 compute-1 sudo[93141]: pam_unix(sudo:session): session closed for user root
Nov 23 20:45:30 compute-1 ceph-mon[80135]: pgmap v106: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s; 13 B/s, 0 objects/s recovering
Nov 23 20:45:30 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 23 20:45:30 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e135 e135: 3 total, 3 up, 3 in
Nov 23 20:45:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 135 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=79/80 n=5 ec=57/44 lis/c=79/79 les/c/f=80/80/0 sis=135 pruub=14.998577118s) [2] r=-1 lpr=135 pi=[79,135)/1 crt=50'991 mlcod 0'0 active pruub 310.215698242s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:45:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 135 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=79/80 n=5 ec=57/44 lis/c=79/79 les/c/f=80/80/0 sis=135 pruub=14.998062134s) [2] r=-1 lpr=135 pi=[79,135)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 310.215698242s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:45:30 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e136 e136: 3 total, 3 up, 3 in
Nov 23 20:45:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 136 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=79/80 n=5 ec=57/44 lis/c=79/79 les/c/f=80/80/0 sis=136) [2]/[0] r=0 lpr=136 pi=[79,136)/1 crt=50'991 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:45:30 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 136 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=79/80 n=5 ec=57/44 lis/c=79/79 les/c/f=80/80/0 sis=136) [2]/[0] r=0 lpr=136 pi=[79,136)/1 crt=50'991 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 20:45:30 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:30 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6930002a80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:30 compute-1 sudo[93293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipsvtybdmraeucdvejhiveykyrpdkfvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930730.6653326-799-92647486911285/AnsiballZ_dnf.py'
Nov 23 20:45:30 compute-1 sudo[93293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:45:31 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 23 20:45:31 compute-1 ceph-mon[80135]: osdmap e135: 3 total, 3 up, 3 in
Nov 23 20:45:31 compute-1 ceph-mon[80135]: osdmap e136: 3 total, 3 up, 3 in
Nov 23 20:45:31 compute-1 python3.9[93295]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 20:45:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:31 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500020e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:31.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:31 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e137 e137: 3 total, 3 up, 3 in
Nov 23 20:45:31 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 137 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=136/137 n=5 ec=57/44 lis/c=79/79 les/c/f=80/80/0 sis=136) [2]/[0] async=[2] r=0 lpr=136 pi=[79,136)/1 crt=50'991 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:45:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:31 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:31.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:31 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:45:32 compute-1 sudo[93293]: pam_unix(sudo:session): session closed for user root
Nov 23 20:45:32 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e138 e138: 3 total, 3 up, 3 in
Nov 23 20:45:32 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 138 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=136/137 n=5 ec=57/44 lis/c=136/79 les/c/f=137/80/0 sis=138 pruub=15.027297020s) [2] async=[2] r=-1 lpr=138 pi=[79,138)/1 crt=50'991 mlcod 50'991 active pruub 312.487762451s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:45:32 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 138 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=136/137 n=5 ec=57/44 lis/c=136/79 les/c/f=137/80/0 sis=138 pruub=15.027228355s) [2] r=-1 lpr=138 pi=[79,138)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 312.487762451s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 20:45:32 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 138 pg[10.1f( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=103/103 les/c/f=104/104/0 sis=138) [0] r=0 lpr=138 pi=[103,138)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:45:32 compute-1 ceph-mon[80135]: osdmap e137: 3 total, 3 up, 3 in
Nov 23 20:45:32 compute-1 ceph-mon[80135]: pgmap v110: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 23 20:45:32 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 20:45:32 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:32 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003cb0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:33 compute-1 sudo[93447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkksgqdgxugthbnheulgicmeioqyubto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930732.759782-823-218293162248028/AnsiballZ_file.py'
Nov 23 20:45:33 compute-1 sudo[93447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:45:33 compute-1 python3.9[93449]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:45:33 compute-1 sudo[93447]: pam_unix(sudo:session): session closed for user root
Nov 23 20:45:33 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:33 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6930003570 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:33 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e139 e139: 3 total, 3 up, 3 in
Nov 23 20:45:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:33.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:33 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 139 pg[10.1f( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=103/103 les/c/f=104/104/0 sis=139) [0]/[2] r=-1 lpr=139 pi=[103,139)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:45:33 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 139 pg[10.1f( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=103/103 les/c/f=104/104/0 sis=139) [0]/[2] r=-1 lpr=139 pi=[103,139)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 20:45:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 20:45:33 compute-1 ceph-mon[80135]: osdmap e138: 3 total, 3 up, 3 in
Nov 23 20:45:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:45:33 compute-1 ceph-mon[80135]: osdmap e139: 3 total, 3 up, 3 in
Nov 23 20:45:33 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Nov 23 20:45:33 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:45:33.663246) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 20:45:33 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Nov 23 20:45:33 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930733663489, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 3142, "num_deletes": 252, "total_data_size": 10546717, "memory_usage": 10929232, "flush_reason": "Manual Compaction"}
Nov 23 20:45:33 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Nov 23 20:45:33 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:33 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500044e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:33.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:33 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930733778937, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 6620082, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7544, "largest_seqno": 10681, "table_properties": {"data_size": 6606243, "index_size": 8861, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3909, "raw_key_size": 34783, "raw_average_key_size": 22, "raw_value_size": 6576054, "raw_average_value_size": 4270, "num_data_blocks": 384, "num_entries": 1540, "num_filter_entries": 1540, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930629, "oldest_key_time": 1763930629, "file_creation_time": 1763930733, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 23 20:45:33 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 115723 microseconds, and 10998 cpu microseconds.
Nov 23 20:45:33 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 20:45:33 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:45:33.778977) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 6620082 bytes OK
Nov 23 20:45:33 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:45:33.778995) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Nov 23 20:45:33 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:45:33.780330) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Nov 23 20:45:33 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:45:33.780349) EVENT_LOG_v1 {"time_micros": 1763930733780343, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 20:45:33 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:45:33.780367) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 20:45:33 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 10531575, prev total WAL file size 10531575, number of live WAL files 2.
Nov 23 20:45:33 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 20:45:33 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:45:33.782780) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Nov 23 20:45:33 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 20:45:33 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(6464KB)], [18(11MB)]
Nov 23 20:45:33 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930733782820, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 18327825, "oldest_snapshot_seqno": -1}
Nov 23 20:45:33 compute-1 sudo[93600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rygjbhhprlzxjmiuaggybtmahpupsqqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930733.587765-847-151841980211/AnsiballZ_stat.py'
Nov 23 20:45:33 compute-1 sudo[93600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:45:33 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 4076 keys, 13908069 bytes, temperature: kUnknown
Nov 23 20:45:33 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930733992008, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 13908069, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13875473, "index_size": 21286, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10245, "raw_key_size": 104122, "raw_average_key_size": 25, "raw_value_size": 13795477, "raw_average_value_size": 3384, "num_data_blocks": 915, "num_entries": 4076, "num_filter_entries": 4076, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763930733, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Nov 23 20:45:33 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 20:45:33 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:45:33.992192) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 13908069 bytes
Nov 23 20:45:33 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:45:33.999342) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 87.6 rd, 66.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(6.3, 11.2 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(4.9) write-amplify(2.1) OK, records in: 4614, records dropped: 538 output_compression: NoCompression
Nov 23 20:45:33 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:45:33.999366) EVENT_LOG_v1 {"time_micros": 1763930733999356, "job": 8, "event": "compaction_finished", "compaction_time_micros": 209237, "compaction_time_cpu_micros": 27320, "output_level": 6, "num_output_files": 1, "total_output_size": 13908069, "num_input_records": 4614, "num_output_records": 4076, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 20:45:34 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 20:45:34 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930734000540, "job": 8, "event": "table_file_deletion", "file_number": 20}
Nov 23 20:45:34 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 20:45:34 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930734002647, "job": 8, "event": "table_file_deletion", "file_number": 18}
Nov 23 20:45:34 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:45:33.782700) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:45:34 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:45:34.002685) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:45:34 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:45:34.002690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:45:34 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:45:34.002691) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:45:34 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:45:34.002696) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:45:34 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:45:34.002698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:45:34 compute-1 python3.9[93602]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:45:34 compute-1 sudo[93600]: pam_unix(sudo:session): session closed for user root
Nov 23 20:45:34 compute-1 sudo[93678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjllwznufpvagmfuxzfsghmrcecvyuuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930733.587765-847-151841980211/AnsiballZ_file.py'
Nov 23 20:45:34 compute-1 sudo[93678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:45:34 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e140 e140: 3 total, 3 up, 3 in
Nov 23 20:45:34 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:34 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:34 compute-1 python3.9[93680]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:45:34 compute-1 sudo[93678]: pam_unix(sudo:session): session closed for user root
Nov 23 20:45:34 compute-1 ceph-mon[80135]: pgmap v113: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 23 20:45:34 compute-1 ceph-mon[80135]: osdmap e140: 3 total, 3 up, 3 in
Nov 23 20:45:34 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:34 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 20:45:35 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e141 e141: 3 total, 3 up, 3 in
Nov 23 20:45:35 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 141 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=139/103 les/c/f=140/104/0 sis=141) [0] r=0 lpr=141 pi=[103,141)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 20:45:35 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 141 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=139/103 les/c/f=140/104/0 sis=141) [0] r=0 lpr=141 pi=[103,141)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 20:45:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:35 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003cd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:35 compute-1 sudo[93830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rypxqolwbwsqzixumoafvssmmrygawfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930735.1332202-886-164441045039103/AnsiballZ_stat.py'
Nov 23 20:45:35 compute-1 sudo[93830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:45:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:35.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:35 compute-1 python3.9[93832]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:45:35 compute-1 sudo[93830]: pam_unix(sudo:session): session closed for user root
Nov 23 20:45:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:35 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6930003570 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:45:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:35.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:45:35 compute-1 sudo[93909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnqrtjmbzuhsxiwugxqpvopiodcfwdek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930735.1332202-886-164441045039103/AnsiballZ_file.py'
Nov 23 20:45:35 compute-1 sudo[93909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:45:36 compute-1 python3.9[93911]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:45:36 compute-1 sudo[93909]: pam_unix(sudo:session): session closed for user root
Nov 23 20:45:36 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 e142: 3 total, 3 up, 3 in
Nov 23 20:45:36 compute-1 ceph-osd[77613]: osd.0 pg_epoch: 142 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=141/142 n=5 ec=57/44 lis/c=139/103 les/c/f=140/104/0 sis=141) [0] r=0 lpr=141 pi=[103,141)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 20:45:36 compute-1 ceph-mon[80135]: osdmap e141: 3 total, 3 up, 3 in
Nov 23 20:45:36 compute-1 ceph-mon[80135]: pgmap v116: 337 pgs: 1 remapped+peering, 336 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s; 0 B/s, 1 objects/s recovering
Nov 23 20:45:36 compute-1 ceph-mon[80135]: mgrmap e35: compute-0.oyehye(active, since 92s), standbys: compute-2.jtkauz, compute-1.kgyerp
Nov 23 20:45:36 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:36 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500044e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:36 compute-1 sudo[93936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:45:36 compute-1 sudo[93936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:45:36 compute-1 sudo[93936]: pam_unix(sudo:session): session closed for user root
Nov 23 20:45:36 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:45:37 compute-1 sudo[94086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkttlfzuccrxlpfyidgqwvgjchtwmemd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930736.8656929-931-39364523228956/AnsiballZ_dnf.py'
Nov 23 20:45:37 compute-1 sudo[94086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:45:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:37 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:37 compute-1 python3.9[94088]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 20:45:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:37.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:37 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:37.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:37 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 20:45:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:37 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:45:38 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:38 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:45:38 compute-1 ceph-mon[80135]: osdmap e142: 3 total, 3 up, 3 in
Nov 23 20:45:38 compute-1 sudo[94091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:45:38 compute-1 sudo[94091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:45:38 compute-1 sudo[94091]: pam_unix(sudo:session): session closed for user root
Nov 23 20:45:38 compute-1 sudo[94116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 20:45:38 compute-1 sudo[94116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:45:38 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:38 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:38 compute-1 sudo[94086]: pam_unix(sudo:session): session closed for user root
Nov 23 20:45:38 compute-1 sudo[94116]: pam_unix(sudo:session): session closed for user root
Nov 23 20:45:39 compute-1 ceph-mon[80135]: pgmap v118: 337 pgs: 1 remapped+peering, 336 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1010 B/s wr, 2 op/s; 0 B/s, 1 objects/s recovering
Nov 23 20:45:39 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:39 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500044e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:39.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:39 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:39 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:39.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:39 compute-1 python3.9[94322]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:45:40 compute-1 ceph-mon[80135]: pgmap v119: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 2.3 KiB/s wr, 7 op/s; 18 B/s, 1 objects/s recovering
Nov 23 20:45:40 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:40 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6930003570 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:40 compute-1 python3.9[94474]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 23 20:45:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:41 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 20:45:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:41 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:41.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:41 compute-1 python3.9[94624]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:45:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:41 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500044e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:45:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:41.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:45:41 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:45:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:42 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:42 compute-1 ceph-mon[80135]: pgmap v120: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 1.4 KiB/s wr, 5 op/s; 15 B/s, 0 objects/s recovering
Nov 23 20:45:42 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:45:42 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:45:42 compute-1 sudo[94775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhzscfzikscexsmszzklfeujqzsyuaie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930742.353985-1054-171539632839733/AnsiballZ_systemd.py'
Nov 23 20:45:42 compute-1 sudo[94775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:45:43 compute-1 python3.9[94777]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 20:45:43 compute-1 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 23 20:45:43 compute-1 systemd[1]: tuned.service: Deactivated successfully.
Nov 23 20:45:43 compute-1 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 23 20:45:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:43 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6930004280 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:43 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 23 20:45:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:43.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:43 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:43.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:43 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 23 20:45:43 compute-1 sudo[94775]: pam_unix(sudo:session): session closed for user root
Nov 23 20:45:43 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:45:43 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 20:45:43 compute-1 ceph-mon[80135]: pgmap v121: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1.4 KiB/s wr, 4 op/s; 15 B/s, 0 objects/s recovering
Nov 23 20:45:43 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:45:43 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:45:43 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 20:45:43 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 20:45:43 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:45:44 compute-1 python3.9[94939]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 23 20:45:44 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:44 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500044e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:44 compute-1 ceph-mon[80135]: Health check cleared: CEPHADM_FAILED_DAEMON (was: 1 failed cephadm daemon(s))
Nov 23 20:45:44 compute-1 ceph-mon[80135]: Cluster is now healthy
Nov 23 20:45:44 compute-1 ceph-mon[80135]: pgmap v122: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1.2 KiB/s wr, 4 op/s; 12 B/s, 0 objects/s recovering
Nov 23 20:45:45 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:45 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500044e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:45:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:45.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:45:45 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:45 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6930004280 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:45.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:46 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003d90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/204546 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 20:45:46 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:45:47 compute-1 ceph-mon[80135]: pgmap v123: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1.1 KiB/s wr, 3 op/s; 10 B/s, 0 objects/s recovering
Nov 23 20:45:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:47 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003d90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:47.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:47 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003d90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:47.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:48 compute-1 sudo[95091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kikttfzevaegzccziheygfzvsejxpipt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930748.017153-1225-197149841436098/AnsiballZ_systemd.py'
Nov 23 20:45:48 compute-1 sudo[95091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:45:48 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:48 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6930004280 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:48 compute-1 python3.9[95093]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 20:45:48 compute-1 sudo[95091]: pam_unix(sudo:session): session closed for user root
Nov 23 20:45:49 compute-1 sudo[95247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heigbtdzopfhxmymgvttdrqpurwwgtmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930748.778368-1225-245522853878013/AnsiballZ_systemd.py'
Nov 23 20:45:49 compute-1 sudo[95247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:45:49 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:45:49 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:45:49 compute-1 ceph-mon[80135]: pgmap v124: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1012 B/s wr, 3 op/s; 9 B/s, 0 objects/s recovering
Nov 23 20:45:49 compute-1 sshd-session[95096]: Received disconnect from 34.91.0.68 port 48086:11: Bye Bye [preauth]
Nov 23 20:45:49 compute-1 sshd-session[95096]: Disconnected from authenticating user root 34.91.0.68 port 48086 [preauth]
Nov 23 20:45:49 compute-1 python3.9[95249]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 20:45:49 compute-1 sudo[95247]: pam_unix(sudo:session): session closed for user root
Nov 23 20:45:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003d90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:49.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:49.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:50 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500044e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:51 compute-1 sshd-session[88588]: Connection closed by 192.168.122.30 port 52886
Nov 23 20:45:51 compute-1 sshd-session[88585]: pam_unix(sshd:session): session closed for user zuul
Nov 23 20:45:51 compute-1 systemd[1]: session-38.scope: Deactivated successfully.
Nov 23 20:45:51 compute-1 systemd[1]: session-38.scope: Consumed 1min 610ms CPU time.
Nov 23 20:45:51 compute-1 systemd-logind[793]: Session 38 logged out. Waiting for processes to exit.
Nov 23 20:45:51 compute-1 systemd-logind[793]: Removed session 38.
Nov 23 20:45:51 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:51 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540035f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:51.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:51 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:51 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938001090 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:51 compute-1 ceph-mon[80135]: pgmap v125: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 276 B/s rd, 92 B/s wr, 0 op/s
Nov 23 20:45:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:45:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:51.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:45:51 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:45:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:52 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:52 compute-1 sudo[95280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 20:45:52 compute-1 sudo[95280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:45:52 compute-1 sudo[95280]: pam_unix(sudo:session): session closed for user root
Nov 23 20:45:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:53 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500044e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:53.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:53 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:45:53 compute-1 ceph-mon[80135]: pgmap v126: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 276 B/s rd, 92 B/s wr, 0 op/s
Nov 23 20:45:53 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:45:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:53 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540035f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:53.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:54 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:54 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938001090 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:55 compute-1 ceph-mon[80135]: pgmap v127: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:45:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:55 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:45:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:55.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:45:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:55 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500044e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:45:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:55.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:45:56 compute-1 sshd-session[95307]: Accepted publickey for zuul from 192.168.122.30 port 55380 ssh2: ECDSA SHA256:7LF3rB/846W//CS4OIcVKlH1BXQGVCcZuH+b9rjPyTo
Nov 23 20:45:56 compute-1 systemd-logind[793]: New session 39 of user zuul.
Nov 23 20:45:56 compute-1 systemd[1]: Started Session 39 of User zuul.
Nov 23 20:45:56 compute-1 sshd-session[95307]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 20:45:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:56 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540035f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:56 compute-1 sudo[95363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:45:56 compute-1 sudo[95363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:45:56 compute-1 sudo[95363]: pam_unix(sudo:session): session closed for user root
Nov 23 20:45:56 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:45:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:57 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938001090 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:57 compute-1 python3.9[95485]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:45:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:45:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:57.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:45:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:57 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:45:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:57.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:45:57 compute-1 ceph-mon[80135]: pgmap v128: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 20:45:58 compute-1 sudo[95642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apdpqqpidlrbblmvvcrwyoyqjnuyotko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930758.213078-69-248480461933226/AnsiballZ_getent.py'
Nov 23 20:45:58 compute-1 sudo[95642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:45:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:58 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:58 compute-1 python3.9[95644]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 23 20:45:58 compute-1 sudo[95642]: pam_unix(sudo:session): session closed for user root
Nov 23 20:45:58 compute-1 ceph-mon[80135]: pgmap v129: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 20:45:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:59 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c001d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:59.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:59 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920000b60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:45:59 compute-1 sudo[95796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfitpxotyayhilkzrajlfztnafdtaaiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930759.3496532-105-252402372458183/AnsiballZ_setup.py'
Nov 23 20:45:59 compute-1 sudo[95796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:45:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:45:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:45:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:59.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:45:59 compute-1 python3.9[95798]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 20:46:00 compute-1 sudo[95796]: pam_unix(sudo:session): session closed for user root
Nov 23 20:46:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:00 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938001090 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:00 compute-1 sudo[95880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axzwintceoavzrnxvmbpmaokarmxcgsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930759.3496532-105-252402372458183/AnsiballZ_dnf.py'
Nov 23 20:46:00 compute-1 sudo[95880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:46:00 compute-1 python3.9[95882]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 23 20:46:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:01 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:01.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:01 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c001d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:01 compute-1 ceph-mon[80135]: pgmap v130: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 23 20:46:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:01.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:01 compute-1 sshd-session[95883]: Received disconnect from 102.176.81.29 port 50316:11: Bye Bye [preauth]
Nov 23 20:46:01 compute-1 sshd-session[95883]: Disconnected from authenticating user root 102.176.81.29 port 50316 [preauth]
Nov 23 20:46:01 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:46:02 compute-1 sudo[95880]: pam_unix(sudo:session): session closed for user root
Nov 23 20:46:02 compute-1 sshd-session[95886]: Invalid user teamspeak from 118.145.189.160 port 46098
Nov 23 20:46:02 compute-1 sshd-session[95886]: Received disconnect from 118.145.189.160 port 46098:11: Bye Bye [preauth]
Nov 23 20:46:02 compute-1 sshd-session[95886]: Disconnected from invalid user teamspeak 118.145.189.160 port 46098 [preauth]
Nov 23 20:46:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:02 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69200016a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:02 compute-1 sudo[96038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlfqytbjvvokhdjsploeselaecawmqwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930762.585257-147-260404830104892/AnsiballZ_dnf.py'
Nov 23 20:46:02 compute-1 sudo[96038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:46:03 compute-1 python3.9[96040]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 20:46:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:03 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:03.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:03 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:03 compute-1 ceph-mon[80135]: pgmap v131: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:46:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:46:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:03.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:04 compute-1 sudo[96038]: pam_unix(sudo:session): session closed for user root
Nov 23 20:46:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:04 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:05 compute-1 sudo[96192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzsxrmgkiftwdtqnbfxspufwsojkdafs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930764.6057127-171-37843929588447/AnsiballZ_systemd.py'
Nov 23 20:46:05 compute-1 sudo[96192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:46:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:05 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69200016a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:05 compute-1 python3.9[96194]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 23 20:46:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:05.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:05 compute-1 sudo[96192]: pam_unix(sudo:session): session closed for user root
Nov 23 20:46:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:05 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:05.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:05 compute-1 ceph-mon[80135]: pgmap v132: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 20:46:06 compute-1 python3.9[96348]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:46:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:06 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c001d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:06 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:46:07 compute-1 sudo[96498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpessyyehmwlugggtcvhkzzrxexskjnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930766.983071-225-172917427418540/AnsiballZ_sefcontext.py'
Nov 23 20:46:07 compute-1 sudo[96498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:46:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:07 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:07.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:07 compute-1 python3.9[96500]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 23 20:46:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:07 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69200016a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:07.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:07 compute-1 ceph-mon[80135]: pgmap v133: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:46:07 compute-1 sudo[96498]: pam_unix(sudo:session): session closed for user root
Nov 23 20:46:08 compute-1 python3.9[96651]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:46:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:09 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:09 compute-1 ceph-mon[80135]: pgmap v134: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:46:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:09 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c001d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:09.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:09 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:09.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:10 compute-1 sudo[96808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btknmdrxnepwvkvjubapqremtgsaxpld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930769.8911514-279-179422318496902/AnsiballZ_dnf.py'
Nov 23 20:46:10 compute-1 sudo[96808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:46:10 compute-1 python3.9[96810]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 20:46:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:11 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920002b10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:11 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920002b10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:11.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:11 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920002b10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:11 compute-1 ceph-mon[80135]: pgmap v135: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 20:46:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:11.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:11 compute-1 sudo[96808]: pam_unix(sudo:session): session closed for user root
Nov 23 20:46:11 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:46:12 compute-1 sudo[96964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhszchhibhvykekygmclnamsrybfzbof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930772.0359342-303-241152064966723/AnsiballZ_command.py'
Nov 23 20:46:12 compute-1 sudo[96964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:46:12 compute-1 python3.9[96966]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:46:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:13 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:13 compute-1 sudo[96964]: pam_unix(sudo:session): session closed for user root
Nov 23 20:46:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:13 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:13.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:13 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:13 compute-1 sshd-session[96889]: Invalid user user2 from 43.225.142.116 port 60696
Nov 23 20:46:13 compute-1 ceph-mon[80135]: pgmap v136: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:46:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:13.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:14 compute-1 sshd-session[96889]: Received disconnect from 43.225.142.116 port 60696:11: Bye Bye [preauth]
Nov 23 20:46:14 compute-1 sshd-session[96889]: Disconnected from invalid user user2 43.225.142.116 port 60696 [preauth]
Nov 23 20:46:14 compute-1 sudo[97252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcyrsfpegudyrsslencgvaxpfwbkncdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930773.959613-327-181347368481107/AnsiballZ_file.py'
Nov 23 20:46:14 compute-1 sudo[97252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:46:14 compute-1 python3.9[97254]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 23 20:46:14 compute-1 sudo[97252]: pam_unix(sudo:session): session closed for user root
Nov 23 20:46:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:15 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920002b10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:15 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:15.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:15 compute-1 python3.9[97404]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:46:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:15 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:46:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:15.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:46:15 compute-1 ceph-mon[80135]: pgmap v137: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 20:46:16 compute-1 sudo[97557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcdpjfpxoxgmzlyxsrgxaycjxgvmjnbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930775.8710132-375-251075339370107/AnsiballZ_dnf.py'
Nov 23 20:46:16 compute-1 sudo[97557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:46:16 compute-1 python3.9[97559]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 20:46:16 compute-1 sudo[97561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:46:16 compute-1 sudo[97561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:46:16 compute-1 sudo[97561]: pam_unix(sudo:session): session closed for user root
Nov 23 20:46:16 compute-1 ceph-mon[80135]: pgmap v138: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:46:16 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:46:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:17 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:17 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:17.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:17 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:17 compute-1 sudo[97557]: pam_unix(sudo:session): session closed for user root
Nov 23 20:46:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:46:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:17.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:46:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:46:18 compute-1 sudo[97736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oeqblmdllrryhewwrkgnhzvugquqniar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930778.0227435-402-275268872379509/AnsiballZ_dnf.py'
Nov 23 20:46:18 compute-1 sudo[97736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:46:18 compute-1 python3.9[97738]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 20:46:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:19 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:19 compute-1 ceph-mon[80135]: pgmap v139: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:46:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:19 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:46:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:19.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:46:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:19 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540035f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:19.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:19 compute-1 sudo[97736]: pam_unix(sudo:session): session closed for user root
Nov 23 20:46:20 compute-1 sudo[97890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hveudtioopthgkkiutjtwtreiimpetwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930780.337415-438-19471406621017/AnsiballZ_stat.py'
Nov 23 20:46:20 compute-1 sudo[97890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:46:20 compute-1 python3.9[97892]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:46:20 compute-1 sudo[97890]: pam_unix(sudo:session): session closed for user root
Nov 23 20:46:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:21 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:21 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:21.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:21 compute-1 sudo[98045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewglphuyygmudwzwomelypjzsmzquxfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930781.1935737-462-171642404336023/AnsiballZ_slurp.py'
Nov 23 20:46:21 compute-1 sudo[98045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:46:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:21 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:21 compute-1 ceph-mon[80135]: pgmap v140: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 20:46:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:21.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:21 compute-1 python3.9[98047]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Nov 23 20:46:21 compute-1 sudo[98045]: pam_unix(sudo:session): session closed for user root
Nov 23 20:46:21 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:46:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:23 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540035f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:23 compute-1 sshd-session[95310]: Connection closed by 192.168.122.30 port 55380
Nov 23 20:46:23 compute-1 sshd-session[95307]: pam_unix(sshd:session): session closed for user zuul
Nov 23 20:46:23 compute-1 systemd[1]: session-39.scope: Deactivated successfully.
Nov 23 20:46:23 compute-1 systemd[1]: session-39.scope: Consumed 17.613s CPU time.
Nov 23 20:46:23 compute-1 systemd-logind[793]: Session 39 logged out. Waiting for processes to exit.
Nov 23 20:46:23 compute-1 systemd-logind[793]: Removed session 39.
Nov 23 20:46:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:23 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:23.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:23 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:23 compute-1 ceph-mon[80135]: pgmap v141: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:46:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:23.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:25 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:25 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540035f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:46:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:25.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:46:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:25 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:25 compute-1 ceph-mon[80135]: pgmap v142: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 20:46:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:25.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:26 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:46:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:27 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:27 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:27.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:27 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540035f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:27.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:27 compute-1 ceph-mon[80135]: pgmap v143: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:46:28 compute-1 sshd-session[98076]: Accepted publickey for zuul from 192.168.122.30 port 33412 ssh2: ECDSA SHA256:7LF3rB/846W//CS4OIcVKlH1BXQGVCcZuH+b9rjPyTo
Nov 23 20:46:28 compute-1 systemd-logind[793]: New session 40 of user zuul.
Nov 23 20:46:28 compute-1 systemd[1]: Started Session 40 of User zuul.
Nov 23 20:46:28 compute-1 sshd-session[98076]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 20:46:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:29 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540035f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:29 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:29.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:29 compute-1 python3.9[98231]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:46:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:29 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6950002920 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:29.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:29 compute-1 ceph-mon[80135]: pgmap v144: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:46:30 compute-1 python3.9[98386]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 20:46:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:31 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:31 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:31.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:31 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:46:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:31.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:46:31 compute-1 ceph-mon[80135]: pgmap v145: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 20:46:31 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:46:32 compute-1 python3.9[98580]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:46:32 compute-1 sshd-session[98079]: Connection closed by 192.168.122.30 port 33412
Nov 23 20:46:32 compute-1 sshd-session[98076]: pam_unix(sshd:session): session closed for user zuul
Nov 23 20:46:32 compute-1 systemd[1]: session-40.scope: Deactivated successfully.
Nov 23 20:46:32 compute-1 systemd[1]: session-40.scope: Consumed 2.200s CPU time.
Nov 23 20:46:32 compute-1 systemd-logind[793]: Session 40 logged out. Waiting for processes to exit.
Nov 23 20:46:32 compute-1 systemd-logind[793]: Removed session 40.
Nov 23 20:46:33 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:33 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6950002920 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:33 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:33 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6950002920 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:46:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:33.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:46:33 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:33 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:46:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:33.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:46:33 compute-1 ceph-mon[80135]: pgmap v146: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:46:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:46:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:35 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:35 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6950002920 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:35.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:35 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:35.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:35 compute-1 ceph-mon[80135]: pgmap v147: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 20:46:36 compute-1 sudo[98608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:46:36 compute-1 sudo[98608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:46:36 compute-1 sudo[98608]: pam_unix(sudo:session): session closed for user root
Nov 23 20:46:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:37 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:37 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:46:37 compute-1 ceph-mon[80135]: pgmap v148: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:46:37 compute-1 sshd-session[98633]: Accepted publickey for zuul from 192.168.122.30 port 43756 ssh2: ECDSA SHA256:7LF3rB/846W//CS4OIcVKlH1BXQGVCcZuH+b9rjPyTo
Nov 23 20:46:37 compute-1 systemd-logind[793]: New session 41 of user zuul.
Nov 23 20:46:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:37 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:37 compute-1 systemd[1]: Started Session 41 of User zuul.
Nov 23 20:46:37 compute-1 sshd-session[98633]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 20:46:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:37.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:37 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6950002920 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:37.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:38 compute-1 python3.9[98787]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:46:39 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:39 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:39 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/204639 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 20:46:39 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:39 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:39 compute-1 python3.9[98941]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:46:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:39.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:39 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:39 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:39 compute-1 ceph-mon[80135]: pgmap v149: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:46:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:39.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:40 compute-1 sudo[99096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggotwavndcdjzwzbdpoqnprkcqvmpufi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930800.0132003-81-198730051357004/AnsiballZ_setup.py'
Nov 23 20:46:40 compute-1 sudo[99096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:46:40 compute-1 python3.9[99098]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 20:46:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:41 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:41 compute-1 sudo[99096]: pam_unix(sudo:session): session closed for user root
Nov 23 20:46:41 compute-1 sudo[99180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prrozciengoozmrrrrvpfuftkhskfgza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930800.0132003-81-198730051357004/AnsiballZ_dnf.py'
Nov 23 20:46:41 compute-1 sudo[99180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:46:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:41 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:41.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:41 compute-1 sshd-session[71401]: Received disconnect from 38.102.83.13 port 58128:11: disconnected by user
Nov 23 20:46:41 compute-1 sshd-session[71401]: Disconnected from user zuul 38.102.83.13 port 58128
Nov 23 20:46:41 compute-1 python3.9[99182]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 20:46:41 compute-1 sshd-session[71398]: pam_unix(sshd:session): session closed for user zuul
Nov 23 20:46:41 compute-1 systemd[1]: session-19.scope: Deactivated successfully.
Nov 23 20:46:41 compute-1 systemd[1]: session-19.scope: Consumed 8.960s CPU time.
Nov 23 20:46:41 compute-1 systemd-logind[793]: Session 19 logged out. Waiting for processes to exit.
Nov 23 20:46:41 compute-1 systemd-logind[793]: Removed session 19.
Nov 23 20:46:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:41 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:41 compute-1 ceph-mon[80135]: pgmap v150: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 20:46:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:41.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:42 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:46:42 compute-1 sudo[99180]: pam_unix(sudo:session): session closed for user root
Nov 23 20:46:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:43 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6950002920 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:43 compute-1 sudo[99334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbaqcvydlgjojmgypxagvzfbtkradpec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930803.1240437-117-85835228345877/AnsiballZ_setup.py'
Nov 23 20:46:43 compute-1 sudo[99334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:46:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:43 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:43 compute-1 sshd-session[99337]: Invalid user solv from 161.35.133.66 port 40444
Nov 23 20:46:43 compute-1 sshd-session[99337]: Connection closed by invalid user solv 161.35.133.66 port 40444 [preauth]
Nov 23 20:46:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:43.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:43 compute-1 python3.9[99336]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 20:46:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:43 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:43 compute-1 ceph-mon[80135]: pgmap v151: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:46:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:46:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:43.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:46:43 compute-1 sudo[99334]: pam_unix(sudo:session): session closed for user root
Nov 23 20:46:45 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:45 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695000bf80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:45 compute-1 sudo[99532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhebeunwllgptkyyxpnpxppizjibcnui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930804.7456367-150-112204519623776/AnsiballZ_file.py'
Nov 23 20:46:45 compute-1 sudo[99532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:46:45 compute-1 python3.9[99534]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:46:45 compute-1 sudo[99532]: pam_unix(sudo:session): session closed for user root
Nov 23 20:46:45 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:45 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:45.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:45 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:45 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:45 compute-1 ceph-mon[80135]: pgmap v152: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 20:46:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:46:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:45.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:46:46 compute-1 sudo[99685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmuhpkcunliffvwnufbjbquqnsrlqrvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930805.5897856-174-138428390267153/AnsiballZ_command.py'
Nov 23 20:46:46 compute-1 sudo[99685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:46:46 compute-1 python3.9[99687]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:46:46 compute-1 sudo[99685]: pam_unix(sudo:session): session closed for user root
Nov 23 20:46:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:47 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:47 compute-1 sudo[99850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfcluqgrjcwkcqfrhxsskjqharbnghhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930806.579075-198-219421714912130/AnsiballZ_stat.py'
Nov 23 20:46:47 compute-1 sudo[99850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:46:47 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:46:47 compute-1 python3.9[99852]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:46:47 compute-1 sudo[99850]: pam_unix(sudo:session): session closed for user root
Nov 23 20:46:47 compute-1 sudo[99929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntmeibhfcsgdfgxawohmungjirmmvcdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930806.579075-198-219421714912130/AnsiballZ_file.py'
Nov 23 20:46:47 compute-1 sudo[99929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:46:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:47 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:47.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:47 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 20:46:47 compute-1 python3.9[99931]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:46:47 compute-1 sudo[99929]: pam_unix(sudo:session): session closed for user root
Nov 23 20:46:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:47 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:47.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:47 compute-1 ceph-mon[80135]: pgmap v153: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 20:46:48 compute-1 sudo[100081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhxvkyjyoxxdxvfxhknijlebjgvqhjea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930808.088331-234-157403394636487/AnsiballZ_stat.py'
Nov 23 20:46:48 compute-1 sudo[100081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:46:48 compute-1 python3.9[100083]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:46:48 compute-1 sudo[100081]: pam_unix(sudo:session): session closed for user root
Nov 23 20:46:48 compute-1 sudo[100159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jechludccojijtafazzcyobtljdqofal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930808.088331-234-157403394636487/AnsiballZ_file.py'
Nov 23 20:46:48 compute-1 sudo[100159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:46:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:46:48 compute-1 ceph-mon[80135]: pgmap v154: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 20:46:49 compute-1 python3.9[100161]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:46:49 compute-1 sudo[100159]: pam_unix(sudo:session): session closed for user root
Nov 23 20:46:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540035f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540035f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:49.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:46:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:49.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:46:50 compute-1 sudo[100312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bejcqwagasiaphxfyotspitgqmtlwavk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930809.6938295-273-104579563186220/AnsiballZ_ini_file.py'
Nov 23 20:46:50 compute-1 sudo[100312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:46:50 compute-1 python3.9[100314]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:46:50 compute-1 sudo[100312]: pam_unix(sudo:session): session closed for user root
Nov 23 20:46:50 compute-1 sudo[100464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfgbwwejeoacbaqevdsknhaneatmiofx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930810.4171107-273-46499386759368/AnsiballZ_ini_file.py'
Nov 23 20:46:50 compute-1 sudo[100464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:46:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:50 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 20:46:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:50 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:46:50 compute-1 python3.9[100466]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:46:50 compute-1 sudo[100464]: pam_unix(sudo:session): session closed for user root
Nov 23 20:46:51 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:51 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:51 compute-1 sudo[100616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idjrgknddtzifemechbccbectkipdwfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930811.0554562-273-237112860453745/AnsiballZ_ini_file.py'
Nov 23 20:46:51 compute-1 sudo[100616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:46:51 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:51 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540035f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:51 compute-1 python3.9[100618]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:46:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:51.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:51 compute-1 sudo[100616]: pam_unix(sudo:session): session closed for user root
Nov 23 20:46:51 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:51 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:51.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:51 compute-1 ceph-mon[80135]: pgmap v155: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 597 B/s wr, 2 op/s
Nov 23 20:46:52 compute-1 sudo[100769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdbjjhjcwaahjxewthebihevwssrzmes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930811.7675796-273-91672680459812/AnsiballZ_ini_file.py'
Nov 23 20:46:52 compute-1 sudo[100769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:46:52 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:46:52 compute-1 python3.9[100771]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:46:52 compute-1 sudo[100769]: pam_unix(sudo:session): session closed for user root
Nov 23 20:46:52 compute-1 sudo[100796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:46:52 compute-1 sudo[100796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:46:52 compute-1 sudo[100796]: pam_unix(sudo:session): session closed for user root
Nov 23 20:46:53 compute-1 ceph-mon[80135]: pgmap v156: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Nov 23 20:46:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:53 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:53 compute-1 sudo[100844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 20:46:53 compute-1 sudo[100844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:46:53 compute-1 sudo[100984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyksyqmgphwcupfxnyeyjwiwwnrznjei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930812.9843938-366-204511534130267/AnsiballZ_dnf.py'
Nov 23 20:46:53 compute-1 sudo[100984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:46:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:53 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:53 compute-1 sudo[100844]: pam_unix(sudo:session): session closed for user root
Nov 23 20:46:53 compute-1 python3.9[100986]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 20:46:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:53.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:53 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540035f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:53 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 20:46:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:53.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:54 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:46:54 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 20:46:54 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:46:54 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:46:54 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 20:46:54 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 20:46:54 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:46:54 compute-1 sudo[100984]: pam_unix(sudo:session): session closed for user root
Nov 23 20:46:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:55 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:55 compute-1 ceph-mon[80135]: pgmap v157: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 20:46:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:55 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:55 compute-1 sudo[101156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzcoiotbjxualieyerdalvhsxtofsdgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930815.292715-399-186818207879524/AnsiballZ_setup.py'
Nov 23 20:46:55 compute-1 sudo[101156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:46:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:46:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:55.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:46:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:55 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:55.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:55 compute-1 python3.9[101158]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:46:55 compute-1 sudo[101156]: pam_unix(sudo:session): session closed for user root
Nov 23 20:46:56 compute-1 sudo[101310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iudpwirppfnidhrkecdtbgidhwpbardb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930816.2301273-423-177286758874639/AnsiballZ_stat.py'
Nov 23 20:46:56 compute-1 sudo[101310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:46:56 compute-1 python3.9[101312]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:46:56 compute-1 sudo[101310]: pam_unix(sudo:session): session closed for user root
Nov 23 20:46:57 compute-1 sudo[101339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:46:57 compute-1 sudo[101339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:46:57 compute-1 sudo[101339]: pam_unix(sudo:session): session closed for user root
Nov 23 20:46:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:57 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954004bd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:57 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:46:57 compute-1 sshd-session[101318]: Invalid user administrator from 34.91.0.68 port 50068
Nov 23 20:46:57 compute-1 sshd-session[101318]: Received disconnect from 34.91.0.68 port 50068:11: Bye Bye [preauth]
Nov 23 20:46:57 compute-1 sshd-session[101318]: Disconnected from invalid user administrator 34.91.0.68 port 50068 [preauth]
Nov 23 20:46:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:57 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:57 compute-1 sudo[101490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvrvltpbraurdtaqncsvupgcailukuaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930817.263136-450-112217313153210/AnsiballZ_stat.py'
Nov 23 20:46:57 compute-1 sudo[101490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:46:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:57.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:57 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:57 compute-1 ceph-mon[80135]: pgmap v158: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 20:46:57 compute-1 python3.9[101492]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:46:57 compute-1 sudo[101490]: pam_unix(sudo:session): session closed for user root
Nov 23 20:46:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:57.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:58 compute-1 sudo[101549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 20:46:58 compute-1 sudo[101549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:46:58 compute-1 sudo[101549]: pam_unix(sudo:session): session closed for user root
Nov 23 20:46:58 compute-1 sudo[101667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwbexvstuswrdqydmpyiopajflibkcpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930818.2077773-480-209297992875697/AnsiballZ_command.py'
Nov 23 20:46:58 compute-1 sudo[101667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:46:58 compute-1 python3.9[101669]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:46:58 compute-1 sudo[101667]: pam_unix(sudo:session): session closed for user root
Nov 23 20:46:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:59 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/204659 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 20:46:59 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:46:59 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:46:59 compute-1 ceph-mon[80135]: pgmap v159: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 20:46:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:59 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954004bd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:59.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:59 compute-1 sudo[101821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwaxdlfdzwsoqmvegucfyrmodgujlcbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930819.196643-510-197820975221240/AnsiballZ_service_facts.py'
Nov 23 20:46:59 compute-1 sudo[101821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:46:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:59 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:46:59 compute-1 python3.9[101823]: ansible-service_facts Invoked
Nov 23 20:46:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:46:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:46:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:59.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:46:59 compute-1 network[101840]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 20:46:59 compute-1 network[101841]: 'network-scripts' will be removed from distribution in near future.
Nov 23 20:46:59 compute-1 network[101842]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 20:47:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:01 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:01 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:01.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:01 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954004bd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:01 compute-1 ceph-mon[80135]: pgmap v160: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 20:47:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:01.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:02 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:47:02 compute-1 sudo[101821]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:03 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003040 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:03 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:03.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:03 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:03 compute-1 ceph-mon[80135]: pgmap v161: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 20:47:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:47:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:03.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:05 compute-1 sudo[102128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnvcmmyxojbwsplowezmhkalkcclrrvw ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1763930824.7449899-555-28503655602191/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1763930824.7449899-555-28503655602191/args'
Nov 23 20:47:05 compute-1 sudo[102128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:47:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:05 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954004bd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:05 compute-1 sudo[102128]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:05 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003040 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:05.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:05 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:47:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:05.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:47:05 compute-1 ceph-mon[80135]: pgmap v162: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Nov 23 20:47:06 compute-1 sudo[102296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stfsuvaydbhzbasvrgcjcdnuyeoblfuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930825.7215805-588-48422739694625/AnsiballZ_dnf.py'
Nov 23 20:47:06 compute-1 sudo[102296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:47:06 compute-1 python3.9[102298]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 20:47:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:07 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:07 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:47:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:07 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954004bd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:07 compute-1 sudo[102296]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:07.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:07 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003040 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:07.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:07 compute-1 ceph-mon[80135]: pgmap v163: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 20:47:08 compute-1 sudo[102450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyeyydmodxfxrldoyefxpmmoorlrvwop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930828.052515-627-208952003229703/AnsiballZ_package_facts.py'
Nov 23 20:47:08 compute-1 sudo[102450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:47:08 compute-1 python3.9[102452]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 23 20:47:08 compute-1 ceph-mon[80135]: pgmap v164: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 20:47:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:09 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:09 compute-1 sudo[102450]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:09 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:09.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:09 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954004bd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:09.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:10 compute-1 sudo[102603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfcbavfvcflvflarpplrsbhqtjvkkclw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930829.8820388-657-244682009507796/AnsiballZ_stat.py'
Nov 23 20:47:10 compute-1 sudo[102603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:47:10 compute-1 python3.9[102605]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:47:10 compute-1 sudo[102603]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:10 compute-1 sudo[102681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrnuoycrplnqbrshrhskpknjsjgmtahw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930829.8820388-657-244682009507796/AnsiballZ_file.py'
Nov 23 20:47:10 compute-1 sudo[102681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:47:10 compute-1 python3.9[102683]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:47:10 compute-1 sudo[102681]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:11 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:11 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:11.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:11 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954004bd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:11 compute-1 ceph-mon[80135]: pgmap v165: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 23 20:47:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:11.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:12 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:47:12 compute-1 sudo[102834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gclcakobpnjqntwqqufpbokwlzarywwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930831.887364-694-89627790344110/AnsiballZ_stat.py'
Nov 23 20:47:12 compute-1 sudo[102834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:47:12 compute-1 python3.9[102836]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:47:12 compute-1 sudo[102834]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:12 compute-1 sudo[102912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yldoysgplonwxaugfrmlyswaleilunym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930831.887364-694-89627790344110/AnsiballZ_file.py'
Nov 23 20:47:12 compute-1 sudo[102912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:47:12 compute-1 python3.9[102914]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:47:12 compute-1 sudo[102912]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:13 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:13 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:13 compute-1 sshd-session[102939]: Connection closed by authenticating user root 92.118.39.92 port 34244 [preauth]
Nov 23 20:47:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:13.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:13 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:13 compute-1 ceph-mon[80135]: pgmap v166: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:47:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:13.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:14 compute-1 sudo[103068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgcaljoidrnwxtwtrarfebdnsyjzvqps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930834.229148-748-77933575436239/AnsiballZ_lineinfile.py'
Nov 23 20:47:14 compute-1 sudo[103068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:47:14 compute-1 python3.9[103070]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:47:14 compute-1 sudo[103068]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:15 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954004bd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:15 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:15.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:15 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:15 compute-1 ceph-mon[80135]: pgmap v167: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 20:47:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:47:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:15.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:47:16 compute-1 sudo[103221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-augagspgkdgfmwogzmwwosojudhpaeti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930835.9785824-793-197181970129176/AnsiballZ_setup.py'
Nov 23 20:47:16 compute-1 sudo[103221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:47:16 compute-1 python3.9[103223]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 20:47:16 compute-1 sudo[103221]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:17 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:17 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:47:17 compute-1 sudo[103255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:47:17 compute-1 sudo[103255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:47:17 compute-1 sudo[103255]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:17 compute-1 sudo[103330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whxvvdrholktukrpfcnagizqvflobyju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930835.9785824-793-197181970129176/AnsiballZ_systemd.py'
Nov 23 20:47:17 compute-1 sudo[103330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:47:17 compute-1 python3.9[103332]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 20:47:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:17 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954004bd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:17 compute-1 sudo[103330]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:17.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:17 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:17 compute-1 ceph-mon[80135]: pgmap v168: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:47:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:47:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:17.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:47:18 compute-1 sshd-session[98637]: Connection closed by 192.168.122.30 port 43756
Nov 23 20:47:18 compute-1 sshd-session[98633]: pam_unix(sshd:session): session closed for user zuul
Nov 23 20:47:18 compute-1 systemd[1]: session-41.scope: Deactivated successfully.
Nov 23 20:47:18 compute-1 systemd[1]: session-41.scope: Consumed 22.594s CPU time.
Nov 23 20:47:18 compute-1 systemd-logind[793]: Session 41 logged out. Waiting for processes to exit.
Nov 23 20:47:18 compute-1 systemd-logind[793]: Removed session 41.
Nov 23 20:47:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:47:18 compute-1 sshd-session[103333]: Invalid user weblogic from 43.225.142.116 port 56822
Nov 23 20:47:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:19 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:19 compute-1 sshd-session[103333]: Received disconnect from 43.225.142.116 port 56822:11: Bye Bye [preauth]
Nov 23 20:47:19 compute-1 sshd-session[103333]: Disconnected from invalid user weblogic 43.225.142.116 port 56822 [preauth]
Nov 23 20:47:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:19 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:19.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:19 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500040f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:19 compute-1 ceph-mon[80135]: pgmap v169: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:47:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:19.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:21 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:21 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:21 compute-1 sshd-session[103364]: Invalid user yhli from 118.145.189.160 port 42320
Nov 23 20:47:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:47:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:21.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:47:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:21 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:21 compute-1 sshd-session[103364]: Received disconnect from 118.145.189.160 port 42320:11: Bye Bye [preauth]
Nov 23 20:47:21 compute-1 sshd-session[103364]: Disconnected from invalid user yhli 118.145.189.160 port 42320 [preauth]
Nov 23 20:47:21 compute-1 ceph-mon[80135]: pgmap v170: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 20:47:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:47:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:21.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:47:22 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:47:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:23 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500040f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:23 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:47:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:23.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:47:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:23 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004e50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:23.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:23 compute-1 ceph-mon[80135]: pgmap v171: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:47:24 compute-1 sshd-session[103368]: Accepted publickey for zuul from 192.168.122.30 port 42152 ssh2: ECDSA SHA256:7LF3rB/846W//CS4OIcVKlH1BXQGVCcZuH+b9rjPyTo
Nov 23 20:47:24 compute-1 systemd-logind[793]: New session 42 of user zuul.
Nov 23 20:47:24 compute-1 systemd[1]: Started Session 42 of User zuul.
Nov 23 20:47:24 compute-1 sshd-session[103368]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 20:47:24 compute-1 sudo[103521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqswqwcmhwevmruhizlgkxxdcbbffclb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930844.2754445-27-14177740921044/AnsiballZ_file.py'
Nov 23 20:47:24 compute-1 sudo[103521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:47:24 compute-1 ceph-mon[80135]: pgmap v172: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 20:47:24 compute-1 python3.9[103523]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:47:24 compute-1 sudo[103521]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:25 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:25 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500040f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:25.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:25 compute-1 sudo[103676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptboirrulvqknfoxcdopgouukyufnooj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930845.2897372-63-186114058667781/AnsiballZ_stat.py'
Nov 23 20:47:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:25 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:25 compute-1 sudo[103676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:47:25 compute-1 sshd-session[103524]: Invalid user server from 102.176.81.29 port 52780
Nov 23 20:47:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:25.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:25 compute-1 python3.9[103678]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:47:25 compute-1 sudo[103676]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:25 compute-1 sshd-session[103524]: Received disconnect from 102.176.81.29 port 52780:11: Bye Bye [preauth]
Nov 23 20:47:25 compute-1 sshd-session[103524]: Disconnected from invalid user server 102.176.81.29 port 52780 [preauth]
Nov 23 20:47:26 compute-1 sudo[103754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-milzkkfzkrvvbxmwoxkdprxeuicmoctw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930845.2897372-63-186114058667781/AnsiballZ_file.py'
Nov 23 20:47:26 compute-1 sudo[103754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:47:26 compute-1 python3.9[103756]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:47:26 compute-1 sudo[103754]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:26 compute-1 sshd-session[103371]: Connection closed by 192.168.122.30 port 42152
Nov 23 20:47:26 compute-1 sshd-session[103368]: pam_unix(sshd:session): session closed for user zuul
Nov 23 20:47:26 compute-1 systemd[1]: session-42.scope: Deactivated successfully.
Nov 23 20:47:26 compute-1 systemd[1]: session-42.scope: Consumed 1.431s CPU time.
Nov 23 20:47:26 compute-1 systemd-logind[793]: Session 42 logged out. Waiting for processes to exit.
Nov 23 20:47:26 compute-1 systemd-logind[793]: Removed session 42.
Nov 23 20:47:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:27 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004e50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:27 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:47:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:27 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:47:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:27.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:47:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:27 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500040f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:27 compute-1 ceph-mon[80135]: pgmap v173: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:47:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:27.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:29 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009600 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:29 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004e50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:29.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:29 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:29 compute-1 ceph-mon[80135]: pgmap v174: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:47:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:29.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:31 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500040f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:31 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009620 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:31.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:31 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004e50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:31 compute-1 ceph-mon[80135]: pgmap v175: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 20:47:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:31.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:32 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:47:32 compute-1 sshd-session[103784]: Accepted publickey for zuul from 192.168.122.30 port 52828 ssh2: ECDSA SHA256:7LF3rB/846W//CS4OIcVKlH1BXQGVCcZuH+b9rjPyTo
Nov 23 20:47:32 compute-1 systemd-logind[793]: New session 43 of user zuul.
Nov 23 20:47:32 compute-1 systemd[1]: Started Session 43 of User zuul.
Nov 23 20:47:32 compute-1 sshd-session[103784]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 20:47:33 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:33 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c004300 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:33 compute-1 python3.9[103937]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:47:33 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:33 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500040f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:47:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:33.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:47:33 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:33 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009640 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:33 compute-1 ceph-mon[80135]: pgmap v176: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:47:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:47:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:33.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:34 compute-1 sudo[104093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cckdrfkcqihfwwiglmavnynfjjvhavac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930853.896634-60-220566789251653/AnsiballZ_file.py'
Nov 23 20:47:34 compute-1 sudo[104093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:47:34 compute-1 python3.9[104095]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:47:34 compute-1 sudo[104093]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:35 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920002550 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:35 compute-1 sudo[104268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfjygwhifefzvpeaxfnpeqmdgfhbctdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930854.721847-84-256393478332004/AnsiballZ_stat.py'
Nov 23 20:47:35 compute-1 sudo[104268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:47:35 compute-1 python3.9[104270]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:47:35 compute-1 sudo[104268]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:35 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c004300 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:35 compute-1 sudo[104347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsmiuklkhgtfqwhgkpeqdtrvabxmqebh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930854.721847-84-256393478332004/AnsiballZ_file.py'
Nov 23 20:47:35 compute-1 sudo[104347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:47:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:35.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:35 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500040f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:35 compute-1 python3.9[104349]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.k4wo73f1 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:47:35 compute-1 sudo[104347]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:35 compute-1 ceph-mon[80135]: pgmap v177: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 20:47:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:35.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:36 compute-1 sudo[104499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zabsgtkgscyqxaviejjzmugaifuxkmuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930856.48612-144-196393444933387/AnsiballZ_stat.py'
Nov 23 20:47:36 compute-1 sudo[104499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:47:36 compute-1 ceph-mon[80135]: pgmap v178: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:47:37 compute-1 python3.9[104501]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:47:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:37 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009660 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:37 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:47:37 compute-1 sudo[104499]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:37 compute-1 sudo[104527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:47:37 compute-1 sudo[104527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:47:37 compute-1 sudo[104527]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:37 compute-1 sudo[104602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjchpbriszvjfrbvtdvrdujnxwvwtexd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930856.48612-144-196393444933387/AnsiballZ_file.py'
Nov 23 20:47:37 compute-1 sudo[104602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:47:37 compute-1 python3.9[104604]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.e4qcnt0b recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:47:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:37 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920002550 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:37 compute-1 sudo[104602]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:37.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:37 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c004300 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:37.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:38 compute-1 sudo[104755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbmmsyftfmutpegakwuscdwowtejszlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930857.931093-183-196120023880272/AnsiballZ_file.py'
Nov 23 20:47:38 compute-1 sudo[104755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:47:38 compute-1 python3.9[104757]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:47:38 compute-1 sudo[104755]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:38 compute-1 sudo[104907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blrrmtdnckyyqesdrdyhwwiplefgebzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930858.711566-207-267609542212535/AnsiballZ_stat.py'
Nov 23 20:47:38 compute-1 sudo[104907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:47:39 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:39 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500040f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:39 compute-1 python3.9[104909]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:47:39 compute-1 sudo[104907]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:39 compute-1 sudo[104985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkvxlsummrkcwuvvzafzrtrojlqblakd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930858.711566-207-267609542212535/AnsiballZ_file.py'
Nov 23 20:47:39 compute-1 sudo[104985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:47:39 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:39 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500040f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:39 compute-1 python3.9[104987]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:47:39 compute-1 sudo[104985]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:39.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:39 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:39 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920002550 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:39 compute-1 ceph-mon[80135]: pgmap v179: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:47:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:47:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:39.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:47:40 compute-1 sudo[105138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yeuxzoptkgasmtehurbsjztxzlclpwuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930859.7576518-207-58800959576497/AnsiballZ_stat.py'
Nov 23 20:47:40 compute-1 sudo[105138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:47:40 compute-1 python3.9[105140]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:47:40 compute-1 sudo[105138]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:40 compute-1 sudo[105216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lelqrthoasnqxrhvbuypujriimxrkimt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930859.7576518-207-58800959576497/AnsiballZ_file.py'
Nov 23 20:47:40 compute-1 sudo[105216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:47:40 compute-1 python3.9[105218]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:47:40 compute-1 sudo[105216]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:41 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c004320 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:41 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0096c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:41 compute-1 systemd[82658]: Created slice User Background Tasks Slice.
Nov 23 20:47:41 compute-1 systemd[82658]: Starting Cleanup of User's Temporary Files and Directories...
Nov 23 20:47:41 compute-1 systemd[82658]: Finished Cleanup of User's Temporary Files and Directories.
Nov 23 20:47:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:41.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:41 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500040f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:41 compute-1 sudo[105370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufalcakefeynuthkcumsqfizzhzbtfhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930861.448691-276-252429241971533/AnsiballZ_file.py'
Nov 23 20:47:41 compute-1 sudo[105370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:47:41 compute-1 ceph-mon[80135]: pgmap v180: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 20:47:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:41.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:41 compute-1 python3.9[105372]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:47:41 compute-1 sudo[105370]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:42 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:47:42 compute-1 sudo[105522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snfcqvdzyxwwrdrhqpcwrflostkxqouk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930862.3031003-300-75482855044554/AnsiballZ_stat.py'
Nov 23 20:47:42 compute-1 sudo[105522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:47:42 compute-1 python3.9[105524]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:47:42 compute-1 sudo[105522]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:43 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920002550 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:43 compute-1 sudo[105600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efnorsdonqbxrknrpwbhnhertwelemzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930862.3031003-300-75482855044554/AnsiballZ_file.py'
Nov 23 20:47:43 compute-1 sudo[105600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:47:43 compute-1 python3.9[105602]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:47:43 compute-1 sudo[105600]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:43 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c004340 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:43.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:43 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c004340 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:43 compute-1 ceph-mon[80135]: pgmap v181: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:47:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:47:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:43.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:47:43 compute-1 sudo[105753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxiezjwmefrgiuyoqyrlzyaixvnvxhie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930863.6347482-336-280588581013149/AnsiballZ_stat.py'
Nov 23 20:47:43 compute-1 sudo[105753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:47:44 compute-1 python3.9[105755]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:47:44 compute-1 sudo[105753]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:44 compute-1 sudo[105831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whzisyairhczhallptebmnvdqrcssunf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930863.6347482-336-280588581013149/AnsiballZ_file.py'
Nov 23 20:47:44 compute-1 sudo[105831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:47:44 compute-1 python3.9[105833]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:47:44 compute-1 sudo[105831]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:45 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:45 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500040f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:45 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:45 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920002550 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:45 compute-1 sudo[105984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzbtymiqikskgzhjvvuddvwpzmqjveys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930865.0071373-372-262975296442516/AnsiballZ_systemd.py'
Nov 23 20:47:45 compute-1 sudo[105984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:47:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:45.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:45 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:45 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c004340 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:45 compute-1 ceph-mon[80135]: pgmap v182: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 20:47:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:47:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:45.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:47:45 compute-1 python3.9[105986]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 20:47:45 compute-1 systemd[1]: Reloading.
Nov 23 20:47:46 compute-1 systemd-rc-local-generator[106013]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:47:46 compute-1 systemd-sysv-generator[106016]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:47:46 compute-1 sudo[105984]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:46 compute-1 sudo[106173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbjrsojihjpoqxqojrepovlskmizmhgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930866.623413-396-99806973281760/AnsiballZ_stat.py'
Nov 23 20:47:46 compute-1 sudo[106173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:47:47 compute-1 python3.9[106175]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:47:47 compute-1 sudo[106173]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:47 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c004340 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:47 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:47:47 compute-1 sudo[106251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znntyilxquwgeebekkkzrdawznvbkhhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930866.623413-396-99806973281760/AnsiballZ_file.py'
Nov 23 20:47:47 compute-1 sudo[106251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:47:47 compute-1 python3.9[106253]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:47:47 compute-1 sudo[106251]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:47 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500040f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:47.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:47 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920003820 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:47 compute-1 ceph-mon[80135]: pgmap v183: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:47:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:47.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:48 compute-1 sudo[106404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uosyytqmpwvtntphjumftublkpgdyfmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930867.9761608-432-163050774568567/AnsiballZ_stat.py'
Nov 23 20:47:48 compute-1 sudo[106404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:47:48 compute-1 python3.9[106406]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:47:48 compute-1 sudo[106404]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:48 compute-1 sudo[106482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbkczqehtmoxjwrinyhljpnvhaxzbqnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930867.9761608-432-163050774568567/AnsiballZ_file.py'
Nov 23 20:47:48 compute-1 sudo[106482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:47:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:47:48 compute-1 python3.9[106484]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:47:48 compute-1 sudo[106482]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009740 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009740 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:49 compute-1 sudo[106635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-javgmanawuqhuxscgllzkegtxnfkjysx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930869.2418294-468-58758633479828/AnsiballZ_systemd.py'
Nov 23 20:47:49 compute-1 sudo[106635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:47:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:47:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:49.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:47:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500040f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:49 compute-1 python3.9[106637]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 20:47:49 compute-1 ceph-mon[80135]: pgmap v184: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:47:49 compute-1 systemd[1]: Reloading.
Nov 23 20:47:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:49.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:49 compute-1 systemd-rc-local-generator[106664]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:47:49 compute-1 systemd-sysv-generator[106669]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:47:50 compute-1 systemd[1]: Starting Create netns directory...
Nov 23 20:47:50 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 23 20:47:50 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 23 20:47:50 compute-1 systemd[1]: Finished Create netns directory.
Nov 23 20:47:50 compute-1 sudo[106635]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:51 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:51 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920003820 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:51 compute-1 python3.9[106830]: ansible-ansible.builtin.service_facts Invoked
Nov 23 20:47:51 compute-1 network[106847]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 20:47:51 compute-1 network[106848]: 'network-scripts' will be removed from distribution in near future.
Nov 23 20:47:51 compute-1 network[106849]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 20:47:51 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:51 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009740 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:51.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:51 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:51 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954001e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:51 compute-1 ceph-mon[80135]: pgmap v185: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 20:47:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:47:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:51.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:47:52 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:47:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:53 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500040f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:53 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920004530 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:53.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:53 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009760 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:53 compute-1 ceph-mon[80135]: pgmap v186: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:47:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:53.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:55 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954001e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:55 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500040f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:55.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:55 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920004530 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:55 compute-1 ceph-mon[80135]: pgmap v187: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 20:47:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:55.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:56 compute-1 ceph-mon[80135]: pgmap v188: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:47:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:57 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009780 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:57 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:47:57 compute-1 sudo[106987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:47:57 compute-1 sudo[106987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:47:57 compute-1 sudo[106987]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:57 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954001e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:57.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:57 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695000c760 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:57.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:58 compute-1 sudo[107138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuodrodvumcdgkdtgdikfjxfcfjordph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930877.9895692-546-74726729850437/AnsiballZ_stat.py'
Nov 23 20:47:58 compute-1 sudo[107138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:47:58 compute-1 sudo[107141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:47:58 compute-1 sudo[107141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:47:58 compute-1 sudo[107141]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:58 compute-1 python3.9[107140]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:47:58 compute-1 sudo[107166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 20:47:58 compute-1 sudo[107138]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:58 compute-1 sudo[107166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:47:58 compute-1 sudo[107273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywpsddlhtodxgkqcxrssviihizflwcfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930877.9895692-546-74726729850437/AnsiballZ_file.py'
Nov 23 20:47:58 compute-1 sudo[107273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:47:58 compute-1 python3.9[107280]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:47:58 compute-1 sudo[107273]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:59 compute-1 sudo[107166]: pam_unix(sudo:session): session closed for user root
Nov 23 20:47:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:59 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920004530 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:59 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0097a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:59.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:59 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954001e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:47:59 compute-1 ceph-mon[80135]: pgmap v189: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:47:59 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:47:59 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 20:47:59 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:47:59 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:47:59 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 20:47:59 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 20:47:59 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:47:59 compute-1 sudo[107450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyuzrwtyifuwlvfkyaebntevjsuqpbis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930879.519508-585-18054100252725/AnsiballZ_file.py'
Nov 23 20:47:59 compute-1 sudo[107450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:47:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:47:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:47:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:59.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:47:59 compute-1 python3.9[107452]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:47:59 compute-1 sudo[107450]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:00 compute-1 sudo[107602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtomclatazmujhmrtmtbpnnflummxxfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930880.3870857-609-56724577328946/AnsiballZ_stat.py'
Nov 23 20:48:00 compute-1 sudo[107602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:00 compute-1 python3.9[107604]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:48:00 compute-1 sudo[107602]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:01 compute-1 sudo[107680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkhkiqlbbelavtnrgbcqlwvheclmdpgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930880.3870857-609-56724577328946/AnsiballZ_file.py'
Nov 23 20:48:01 compute-1 sudo[107680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:01 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695000c760 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:01 compute-1 python3.9[107682]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:48:01 compute-1 sudo[107680]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:01 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920004530 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:48:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:01.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:48:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:01 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0097c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:01 compute-1 ceph-mon[80135]: pgmap v190: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 20:48:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:01.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:01 compute-1 sshd-session[107707]: Invalid user smart from 34.91.0.68 port 52042
Nov 23 20:48:02 compute-1 sshd-session[107707]: Received disconnect from 34.91.0.68 port 52042:11: Bye Bye [preauth]
Nov 23 20:48:02 compute-1 sshd-session[107707]: Disconnected from invalid user smart 34.91.0.68 port 52042 [preauth]
Nov 23 20:48:02 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:48:02 compute-1 sudo[107835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgubgafktyadpogyqinjhudufsdsenev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930881.9700918-654-80074336997125/AnsiballZ_timezone.py'
Nov 23 20:48:02 compute-1 sudo[107835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:02 compute-1 python3.9[107837]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 23 20:48:02 compute-1 systemd[1]: Starting Time & Date Service...
Nov 23 20:48:02 compute-1 systemd[1]: Started Time & Date Service.
Nov 23 20:48:02 compute-1 sudo[107835]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:03 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954001e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:03 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695000c760 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:03 compute-1 sudo[107992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxkggyvvzvxwgazrmqxueanxhjylzubt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930883.332713-681-201446762404416/AnsiballZ_file.py'
Nov 23 20:48:03 compute-1 sudo[107992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:03 compute-1 sudo[107993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 20:48:03 compute-1 sudo[107993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:48:03 compute-1 sudo[107993]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:03.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:03 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920004530 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:03 compute-1 python3.9[108001]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:48:03 compute-1 sudo[107992]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:03 compute-1 ceph-mon[80135]: pgmap v191: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:48:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:48:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:48:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:48:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:03.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:04 compute-1 sudo[108169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuytydlchndfqerrvkvliveqxisuzknf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930884.1683059-705-194782830360703/AnsiballZ_stat.py'
Nov 23 20:48:04 compute-1 sudo[108169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:04 compute-1 python3.9[108171]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:48:04 compute-1 sudo[108169]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:04 compute-1 sudo[108247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edvwspvsowlsqmprjufpctgdnzvunwvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930884.1683059-705-194782830360703/AnsiballZ_file.py'
Nov 23 20:48:04 compute-1 sudo[108247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:05 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0097e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:05 compute-1 python3.9[108249]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:48:05 compute-1 sudo[108247]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:05 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954001e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:05.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:05 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938002600 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:05 compute-1 ceph-mon[80135]: pgmap v192: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 20:48:05 compute-1 sudo[108401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fktylrbquhvmxhaqvczyheeklxfyevtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930885.521532-741-269736172093644/AnsiballZ_stat.py'
Nov 23 20:48:05 compute-1 sudo[108401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:05.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:06 compute-1 python3.9[108403]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:48:06 compute-1 sudo[108401]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:06 compute-1 sudo[108479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwfojpbkrwnccoxwkeutpkeatrveelqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930885.521532-741-269736172093644/AnsiballZ_file.py'
Nov 23 20:48:06 compute-1 sudo[108479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:06 compute-1 python3.9[108481]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.6aanxriz recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:48:06 compute-1 sudo[108479]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:07 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:48:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:07 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695000c760 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:07 compute-1 sudo[108631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbkejruteqturfpdbsclltzrxsikbvgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930887.0534492-778-229630766002332/AnsiballZ_stat.py'
Nov 23 20:48:07 compute-1 sudo[108631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:07 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695000c760 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:07 compute-1 python3.9[108633]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:48:07 compute-1 sudo[108631]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:07.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:07 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954001e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:07 compute-1 ceph-mon[80135]: pgmap v193: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:48:07 compute-1 sudo[108710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlzwjiftzvkeukcmttfoavttnqzeoskn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930887.0534492-778-229630766002332/AnsiballZ_file.py'
Nov 23 20:48:07 compute-1 sudo[108710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:07.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:08 compute-1 python3.9[108712]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:48:08 compute-1 sudo[108710]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:08 compute-1 sudo[108862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqyefhbiybejtjakjdxxrzbhevopohcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930888.4178116-816-46837184537253/AnsiballZ_command.py'
Nov 23 20:48:08 compute-1 sudo[108862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:09 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954001e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:09 compute-1 python3.9[108864]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:48:09 compute-1 sudo[108862]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:09 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695000c760 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:09.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:09 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695000c760 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:09 compute-1 sudo[109016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjrxjyjtrtsikdkpehkgwpjnpvohlxmv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763930889.379372-840-156937123547825/AnsiballZ_edpm_nftables_from_files.py'
Nov 23 20:48:09 compute-1 sudo[109016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:09 compute-1 ceph-mon[80135]: pgmap v194: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:48:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:09.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:09 compute-1 python3[109018]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 23 20:48:10 compute-1 sudo[109016]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:10 compute-1 sudo[109168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvzegqtlgqrieoilvclgjewlrsjctlvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930890.373696-864-82665390844024/AnsiballZ_stat.py'
Nov 23 20:48:10 compute-1 sudo[109168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:10 compute-1 python3.9[109170]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:48:10 compute-1 sudo[109168]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:11 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954001e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:11 compute-1 sudo[109246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlsexphameohlaxalbxtnpbcqgrttnae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930890.373696-864-82665390844024/AnsiballZ_file.py'
Nov 23 20:48:11 compute-1 sudo[109246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:11 compute-1 python3.9[109248]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:48:11 compute-1 sudo[109246]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:11 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954001e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:11.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:11 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:11 compute-1 ceph-mon[80135]: pgmap v195: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 20:48:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:48:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:11.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:48:12 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:48:12 compute-1 sudo[109399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nckmqdxeoggwaligvrhdqaqkrvibmydc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930891.8127606-900-202724643665485/AnsiballZ_stat.py'
Nov 23 20:48:12 compute-1 sudo[109399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:12 compute-1 python3.9[109401]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:48:12 compute-1 sudo[109399]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:12 compute-1 sudo[109477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftvrrwcziooioctyrcwepjsnqtsgqprb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930891.8127606-900-202724643665485/AnsiballZ_file.py'
Nov 23 20:48:12 compute-1 sudo[109477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:12 compute-1 python3.9[109479]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:48:12 compute-1 sudo[109477]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:13 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695000c760 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:13 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954001e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:13 compute-1 sudo[109630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umygflwsgnkftawrghfiucgfothttnnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930893.2862892-936-107665600566492/AnsiballZ_stat.py'
Nov 23 20:48:13 compute-1 sudo[109630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:13.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:13 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938002600 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:13 compute-1 python3.9[109632]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:48:13 compute-1 sudo[109630]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:13 compute-1 ceph-mon[80135]: pgmap v196: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:48:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:13.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:14 compute-1 sudo[109708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpwckjcrzkkkftgtzgelxdpcoxjjqmxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930893.2862892-936-107665600566492/AnsiballZ_file.py'
Nov 23 20:48:14 compute-1 sudo[109708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:14 compute-1 python3.9[109710]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:48:14 compute-1 sudo[109708]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:14 compute-1 sudo[109860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqbcwddoqtzefmidfsysyglshsrbojwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930894.6982055-972-53081566674950/AnsiballZ_stat.py'
Nov 23 20:48:14 compute-1 sudo[109860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:15 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938002600 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:15 compute-1 python3.9[109862]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:48:15 compute-1 sudo[109860]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:15 compute-1 sudo[109938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfeuejwvtnyjjrrygnsghrndtyamzinm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930894.6982055-972-53081566674950/AnsiballZ_file.py'
Nov 23 20:48:15 compute-1 sudo[109938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:15 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695000c760 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:15 compute-1 python3.9[109941]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:48:15 compute-1 sudo[109938]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:15.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:15 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954001e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:15 compute-1 ceph-mon[80135]: pgmap v197: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 20:48:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:15.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:16 compute-1 sudo[110091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jchcnxphhjlhcynevyxujtusjkbyrglw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930896.1281362-1008-162891608499592/AnsiballZ_stat.py'
Nov 23 20:48:16 compute-1 sudo[110091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:16 compute-1 python3.9[110093]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:48:16 compute-1 sudo[110091]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:16 compute-1 ceph-mon[80135]: pgmap v198: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:48:17 compute-1 sudo[110169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tacupwxyltudtrufvqhtmjdoqykhtkfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930896.1281362-1008-162891608499592/AnsiballZ_file.py'
Nov 23 20:48:17 compute-1 sudo[110169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:17 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:48:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:17 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938002600 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:17 compute-1 python3.9[110171]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:48:17 compute-1 sudo[110169]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:17 compute-1 sudo[110196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:48:17 compute-1 sudo[110196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:48:17 compute-1 sudo[110196]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:17 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938002600 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:48:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:17.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:48:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:17 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695000c760 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:48:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:17.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:48:17 compute-1 sudo[110348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njqpzfqpoefljorhxmtxreeeijrbwtfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930897.6621182-1047-81701588193257/AnsiballZ_command.py'
Nov 23 20:48:17 compute-1 sudo[110348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:18 compute-1 python3.9[110350]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:48:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:48:18 compute-1 sudo[110348]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:19 compute-1 sudo[110503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lydxaunwqhvyfpadtirenydhpvnbbdrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930898.6124082-1071-210003205142542/AnsiballZ_blockinfile.py'
Nov 23 20:48:19 compute-1 sudo[110503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:19 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:19 compute-1 ceph-mon[80135]: pgmap v199: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:48:19 compute-1 python3.9[110505]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:48:19 compute-1 sudo[110503]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:19 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954001e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:19.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:19 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009900 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:19 compute-1 sudo[110656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guxlpneevukrrqchtcuctbenqzvdsbjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930899.592705-1098-191654266036957/AnsiballZ_file.py'
Nov 23 20:48:19 compute-1 sudo[110656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:48:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:19.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:48:20 compute-1 python3.9[110658]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:48:20 compute-1 sudo[110656]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:20 compute-1 sudo[110808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdwuxycuseomozughnvymrkfiaauukhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930900.3199124-1098-228572659290237/AnsiballZ_file.py'
Nov 23 20:48:20 compute-1 sudo[110808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:21 compute-1 python3.9[110810]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:48:21 compute-1 sudo[110808]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:21 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695000c760 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:21 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:21.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:21 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954001e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:21 compute-1 ceph-mon[80135]: pgmap v200: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 20:48:21 compute-1 sudo[110961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbkfppkfcntxgocbpurvemqbrzlaunrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930901.376794-1143-189255720908521/AnsiballZ_mount.py'
Nov 23 20:48:21 compute-1 sudo[110961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:21.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:22 compute-1 python3.9[110963]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 23 20:48:22 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:48:22 compute-1 sudo[110961]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:22 compute-1 sudo[111115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcmhpfkxlkxufjtkqnhyzrjasbluvjue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930902.266997-1143-230671553288829/AnsiballZ_mount.py'
Nov 23 20:48:22 compute-1 sudo[111115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:22 compute-1 python3.9[111117]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 23 20:48:22 compute-1 sudo[111115]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:23 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009920 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:23 compute-1 sshd-session[103787]: Connection closed by 192.168.122.30 port 52828
Nov 23 20:48:23 compute-1 sshd-session[103784]: pam_unix(sshd:session): session closed for user zuul
Nov 23 20:48:23 compute-1 systemd[1]: session-43.scope: Deactivated successfully.
Nov 23 20:48:23 compute-1 systemd[1]: session-43.scope: Consumed 29.733s CPU time.
Nov 23 20:48:23 compute-1 systemd-logind[793]: Session 43 logged out. Waiting for processes to exit.
Nov 23 20:48:23 compute-1 systemd-logind[793]: Removed session 43.
Nov 23 20:48:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:23 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695000c760 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:23.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:23 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:23 compute-1 ceph-mon[80135]: pgmap v201: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:48:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:23.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:24 compute-1 sshd-session[111087]: Received disconnect from 43.225.142.116 port 52956:11: Bye Bye [preauth]
Nov 23 20:48:24 compute-1 sshd-session[111087]: Disconnected from authenticating user root 43.225.142.116 port 52956 [preauth]
Nov 23 20:48:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:25 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954001e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:25 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009920 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:25.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:25 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009920 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:25 compute-1 ceph-mon[80135]: pgmap v202: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 20:48:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:25.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:27 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:48:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:27 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:27 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954001e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:27.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:27 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954001e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:27 compute-1 ceph-mon[80135]: pgmap v203: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:48:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:27.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:28 compute-1 sshd-session[111145]: Accepted publickey for zuul from 192.168.122.30 port 45190 ssh2: ECDSA SHA256:7LF3rB/846W//CS4OIcVKlH1BXQGVCcZuH+b9rjPyTo
Nov 23 20:48:28 compute-1 systemd-logind[793]: New session 44 of user zuul.
Nov 23 20:48:28 compute-1 systemd[1]: Started Session 44 of User zuul.
Nov 23 20:48:28 compute-1 sshd-session[111145]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 20:48:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:29 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695000c760 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:29 compute-1 sudo[111298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odrynjlbujoemvverrrlaxpfrqvmgypa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930908.7449405-19-214211873349249/AnsiballZ_tempfile.py'
Nov 23 20:48:29 compute-1 sudo[111298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:29 compute-1 python3.9[111300]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 23 20:48:29 compute-1 sudo[111298]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:29 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:29.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:29 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c00ad80 fd 49 proxy ignored for local
Nov 23 20:48:29 compute-1 kernel: ganesha.nfsd[95492]: segfault at 50 ip 00007f6a0584032e sp 00007f69d57f9210 error 4 in libntirpc.so.5.8[7f6a05825000+2c000] likely on CPU 5 (core 0, socket 5)
Nov 23 20:48:29 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 23 20:48:29 compute-1 systemd[1]: Created slice Slice /system/systemd-coredump.
Nov 23 20:48:29 compute-1 systemd[1]: Started Process Core Dump (PID 111326/UID 0).
Nov 23 20:48:29 compute-1 ceph-mon[80135]: pgmap v204: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:48:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:29.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:30 compute-1 sudo[111453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tspocrefxkckyjtagtunftgrprmfgeye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930909.9380288-55-79945766816751/AnsiballZ_stat.py'
Nov 23 20:48:30 compute-1 sudo[111453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:30 compute-1 python3.9[111455]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:48:30 compute-1 sudo[111453]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:30 compute-1 systemd-coredump[111327]: Process 85672 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 68:
                                                    #0  0x00007f6a0584032e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 23 20:48:31 compute-1 systemd[1]: systemd-coredump@0-111326-0.service: Deactivated successfully.
Nov 23 20:48:31 compute-1 systemd[1]: systemd-coredump@0-111326-0.service: Consumed 1.142s CPU time.
Nov 23 20:48:31 compute-1 podman[111531]: 2025-11-23 20:48:31.052822171 +0000 UTC m=+0.024876145 container died 466d10d0fad1c5a4f86b3f6ff6a62c2f5b4e27c7206b481850c43d696b989539 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Nov 23 20:48:31 compute-1 systemd[1]: var-lib-containers-storage-overlay-b8322ba23651b391cd38f2980d80d3d4d5a77a2d7c68fccc64436bbb1b0ee305-merged.mount: Deactivated successfully.
Nov 23 20:48:31 compute-1 podman[111531]: 2025-11-23 20:48:31.089008259 +0000 UTC m=+0.061062183 container remove 466d10d0fad1c5a4f86b3f6ff6a62c2f5b4e27c7206b481850c43d696b989539 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 20:48:31 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Main process exited, code=exited, status=139/n/a
Nov 23 20:48:31 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Failed with result 'exit-code'.
Nov 23 20:48:31 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.990s CPU time.
Nov 23 20:48:31 compute-1 sudo[111657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyijsxtthsmxballqjjyrxalcgjpbkye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930910.983332-79-113354041767199/AnsiballZ_slurp.py'
Nov 23 20:48:31 compute-1 sudo[111657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:31 compute-1 python3.9[111659]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Nov 23 20:48:31 compute-1 sudo[111657]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:31.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:31 compute-1 ceph-mon[80135]: pgmap v205: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 20:48:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:31.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:32 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:48:32 compute-1 sudo[111810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyyspotzevsarowzqhnpuxznitxtppkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930911.8714442-103-260712237537886/AnsiballZ_stat.py'
Nov 23 20:48:32 compute-1 sudo[111810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:32 compute-1 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 20:48:32 compute-1 python3.9[111812]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.1u9ifphu follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:48:32 compute-1 sudo[111810]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:32 compute-1 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 23 20:48:32 compute-1 sudo[111938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lerlmcsevvudhncmslyczbawlljkelob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930911.8714442-103-260712237537886/AnsiballZ_copy.py'
Nov 23 20:48:32 compute-1 sudo[111938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:32 compute-1 ceph-mon[80135]: pgmap v206: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:48:33 compute-1 python3.9[111940]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.1u9ifphu mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930911.8714442-103-260712237537886/.source.1u9ifphu _original_basename=.ez6migu3 follow=False checksum=6cd7b37efcd593debc42fa9bb68a32d60f10fcfa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:48:33 compute-1 sudo[111938]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:33.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:48:33 compute-1 sudo[112091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqgahoisosawfemfvxqrbcwhboivdaua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930913.414065-148-4019779799946/AnsiballZ_setup.py'
Nov 23 20:48:33 compute-1 sudo[112091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:33.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:34 compute-1 python3.9[112093]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:48:34 compute-1 sudo[112091]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:34 compute-1 ceph-mon[80135]: pgmap v207: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 20:48:35 compute-1 sudo[112243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seebgwolatugnmimogtvikszxvaoblxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930914.6255689-173-163006565262120/AnsiballZ_blockinfile.py'
Nov 23 20:48:35 compute-1 sudo[112243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:35 compute-1 python3.9[112245]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZyfELJX7KkP8E4Yo+r9guKNy64TSJDfB+rBUAclCyKwGxjxhBTRAJJCOL6kSBIkbUub9LTNVh+s271jrKlK1rYs22c1DFe3ci9hBERauX4lIaBHw9kJBHURb9cB+VbonXf0hAdqGDLTXdqFnbed2oU0ngSuVesO/C9+SCSZFsfERuUe3/SXKbWfjehgYTi4GquXo6Ynq1HopME6mRR8qGsv6sgdkxpSaUiwtSBG5ONOSyzrev1t2hdDsRxvbZAZgV2ab6IMD9DTKaIXphHpumL6txas+nKViUfm+gW6p6EKNdHb/VLha7ghY3p4LE3OdXM4eytxszF0Fzs/0CXzafNxHjVjHzqxrJBi/PT22i6QD60NTimabHulw8IkZG6KsuNVq1rmlSSGQGjqAs7l6hNH8kF4uq1JwOl6mVgct5iE+ZzhfO5WRWShiE1LlCZpqdYE9VqmBrK5r70N0srW3h2mb4lTAwvC089Vert64D29M7riepyGCrGInpE4aK7Sk=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIFop+sR8mOkxOfCCMKg8Voa+6Ns0zHMRLKg+WdnL56v
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFQ0Rj0/OjRh0AQLkOX0VueFFf3xD5FqSzewSN/8R0Xh0Ybf7bkNUGszKaTkKSUBKR2e9V/GwA+BxEChWtzU3sY=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCrfRiqah4FSYlin2mt3PYchMDfWNjxPXqcCCW7iymA93OXZ1reX9dxsJRSssuxIkwaYv7OC+wrUmMOsDhULhy9uNDku8TnHodZVNms8z3UwQW2GPePqEdQ56rKSJ5DhpY0ly7PapOQ69jitmBGQjsu8go19hV3djXlFm1du9V1HMnfGqyr5REZ5ACjW2Rr0108gdYgrt/xh+1sl7cgixK0vUKaqN47/VJHXSTk20aXknt5lhurSKMbRD4cgP1pz0lBJ8LfEvFajLlXBk7MtsI8L94qtHH20hWUk8P2FmqsM4LoLIY4YkAT6kzDPkNdC5F3bpl67NzNXKLdStChVsjRVgrsR0JhU4YO8nYPSqn85KWQUMsuQhXfeMPb5a0n4vSmF0hQhaTctIIK5Yq+qK3S5Ee0tV+ZLMcrYiRfVJYjULh+8LazeUYBtZAVkOoenlHNpcxfVl2v8Fx37PYu6wY/1Ol7i+Fyg+DMculPNu0E00hYIfuSPW06sm98V0zJ7bs=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC0+oolG6Djq6MTp/HXh3SEc2a8aDRu5q8AnCiNHx/fN
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC1GCZqvti/wHDh2Oo7NSAFToY/dykBAXL2bgJmg9kqKO2qTzfIYtCRiGP/x9yaw+D3ymaftMgdHgFkzRtYcXz0=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCo3+sqhh74Wal6wWv19BRNHNnjTPYKculYCUftHSfYmbg5LryLTnsWAJdalXVBYQIJtq5uFrJRBG4C0R1XMU/MT4ZxuTtafwAzeTnKoCHbN/+mH31bndpvGKYRQ9AQHmamquyDQaSEjIYKFaK6eM7uVV/PaSZqasrB6awv3MeDH/GhtlyJwY7ble8M3UtG9jMWuPq/qX+TnKCZI3COyKBCe7F3aeaIewsho+T7qsRd8UNr55SHWJ1N6xYtA4FUayJ4cCZUeo4+SOJuQWb6A3HZm75y0LpdLDFH54DqyDqKVvDUfaKJJQV++3GT9kF9+jrwJDEK9VslSlEylLZ0zg1J0Z2zyMOwOAxBKEUXQNymC+00ybwJd4trP7KDy6+ZGOtHEThBgVO6vtuxQLWhseNa3otNXh7cHTf+Jfo7uo1wHbasd6aD1AVxvt4yKgOGy1ypt9Ps/COlbfHHFYZsI5gVLyJyK8aeipUjJUe6u6Qlf/F/inV1rwRBg8li7oeW7Ss=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFE96kcIFDgsK09K4ZL9HihPRGUmf4YDgXlXqtYy0M8r
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJoWf98fFp9mmY0S22K7n+FjL7cDYCGLm8eglORId7ZBFp9PG5e8P+ws6VWjBbceNazmskqBYurrlrsvB4Mu40E=
                                              create=True mode=0644 path=/tmp/ansible.1u9ifphu state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:48:35 compute-1 sudo[112243]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:35.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/204835 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 20:48:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:48:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:35.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:48:36 compute-1 sudo[112396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhjksvexslxnkdxjiccepaiklzxwtwlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930915.5408144-197-142896028139014/AnsiballZ_command.py'
Nov 23 20:48:36 compute-1 sudo[112396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:36 compute-1 python3.9[112398]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.1u9ifphu' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:48:36 compute-1 sudo[112396]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:37 compute-1 sudo[112555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egyssorvngywfhrlpgnajlbamvwfcpux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930917.0426679-221-78792014447967/AnsiballZ_file.py'
Nov 23 20:48:37 compute-1 sudo[112555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:37 compute-1 sudo[112549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:48:37 compute-1 sudo[112549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:48:37 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:48:37 compute-1 sudo[112549]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:37 compute-1 ceph-mon[80135]: pgmap v208: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 20:48:37 compute-1 python3.9[112576]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.1u9ifphu state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:48:37 compute-1 sudo[112555]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:48:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:37.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:48:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:48:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:37.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:48:38 compute-1 sshd-session[111148]: Connection closed by 192.168.122.30 port 45190
Nov 23 20:48:38 compute-1 sshd-session[111145]: pam_unix(sshd:session): session closed for user zuul
Nov 23 20:48:38 compute-1 systemd[1]: session-44.scope: Deactivated successfully.
Nov 23 20:48:38 compute-1 systemd[1]: session-44.scope: Consumed 5.142s CPU time.
Nov 23 20:48:38 compute-1 systemd-logind[793]: Session 44 logged out. Waiting for processes to exit.
Nov 23 20:48:38 compute-1 systemd-logind[793]: Removed session 44.
Nov 23 20:48:39 compute-1 sshd-session[112579]: Received disconnect from 118.145.189.160 port 53186:11: Bye Bye [preauth]
Nov 23 20:48:39 compute-1 sshd-session[112579]: Disconnected from authenticating user root 118.145.189.160 port 53186 [preauth]
Nov 23 20:48:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:39.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:39 compute-1 ceph-mon[80135]: pgmap v209: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 20:48:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:39.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:41 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Scheduled restart job, restart counter is at 1.
Nov 23 20:48:41 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 20:48:41 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.990s CPU time.
Nov 23 20:48:41 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 20:48:41 compute-1 podman[112654]: 2025-11-23 20:48:41.519931699 +0000 UTC m=+0.036703251 container create 9cce1bf66affa6ef4f347207d4a0ad972590fbbe226e35c4c7f83bf8a6579c22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 20:48:41 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36431d4c51e2d3482a2149cb2663510026d0fcb8438692ee02935721d35a5258/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 23 20:48:41 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36431d4c51e2d3482a2149cb2663510026d0fcb8438692ee02935721d35a5258/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 20:48:41 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36431d4c51e2d3482a2149cb2663510026d0fcb8438692ee02935721d35a5258/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 20:48:41 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36431d4c51e2d3482a2149cb2663510026d0fcb8438692ee02935721d35a5258/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.fuxuha-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 20:48:41 compute-1 podman[112654]: 2025-11-23 20:48:41.575946444 +0000 UTC m=+0.092718006 container init 9cce1bf66affa6ef4f347207d4a0ad972590fbbe226e35c4c7f83bf8a6579c22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 23 20:48:41 compute-1 podman[112654]: 2025-11-23 20:48:41.582638852 +0000 UTC m=+0.099410404 container start 9cce1bf66affa6ef4f347207d4a0ad972590fbbe226e35c4c7f83bf8a6579c22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 23 20:48:41 compute-1 bash[112654]: 9cce1bf66affa6ef4f347207d4a0ad972590fbbe226e35c4c7f83bf8a6579c22
Nov 23 20:48:41 compute-1 podman[112654]: 2025-11-23 20:48:41.502465481 +0000 UTC m=+0.019237063 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 20:48:41 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 20:48:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:41 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 23 20:48:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:41 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 23 20:48:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:41 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 23 20:48:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:41 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 23 20:48:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:41 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 23 20:48:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:41 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 23 20:48:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:41 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 23 20:48:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:41 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 20:48:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:41.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:41 compute-1 ceph-mon[80135]: pgmap v210: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:48:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:41.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:42 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:48:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:43.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:43 compute-1 sshd-session[112714]: Accepted publickey for zuul from 192.168.122.30 port 34244 ssh2: ECDSA SHA256:7LF3rB/846W//CS4OIcVKlH1BXQGVCcZuH+b9rjPyTo
Nov 23 20:48:43 compute-1 systemd-logind[793]: New session 45 of user zuul.
Nov 23 20:48:43 compute-1 systemd[1]: Started Session 45 of User zuul.
Nov 23 20:48:43 compute-1 sshd-session[112714]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 20:48:43 compute-1 ceph-mon[80135]: pgmap v211: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 20:48:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:48:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:43.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:48:44 compute-1 python3.9[112867]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:48:44 compute-1 ceph-mon[80135]: pgmap v212: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:48:45 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Nov 23 20:48:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:48:45.430532) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 20:48:45 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Nov 23 20:48:45 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930925430710, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2105, "num_deletes": 251, "total_data_size": 6065391, "memory_usage": 6152072, "flush_reason": "Manual Compaction"}
Nov 23 20:48:45 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Nov 23 20:48:45 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930925450225, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 2473480, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10686, "largest_seqno": 12786, "table_properties": {"data_size": 2467201, "index_size": 3222, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15622, "raw_average_key_size": 20, "raw_value_size": 2453502, "raw_average_value_size": 3178, "num_data_blocks": 143, "num_entries": 772, "num_filter_entries": 772, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930734, "oldest_key_time": 1763930734, "file_creation_time": 1763930925, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 23 20:48:45 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 19729 microseconds, and 5682 cpu microseconds.
Nov 23 20:48:45 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 20:48:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:48:45.450267) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 2473480 bytes OK
Nov 23 20:48:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:48:45.450283) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Nov 23 20:48:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:48:45.451660) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Nov 23 20:48:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:48:45.451673) EVENT_LOG_v1 {"time_micros": 1763930925451669, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 20:48:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:48:45.451689) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 20:48:45 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 6056177, prev total WAL file size 6056177, number of live WAL files 2.
Nov 23 20:48:45 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 20:48:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:48:45.452828) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Nov 23 20:48:45 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 20:48:45 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(2415KB)], [21(13MB)]
Nov 23 20:48:45 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930925452889, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 16381549, "oldest_snapshot_seqno": -1}
Nov 23 20:48:45 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 4415 keys, 14652851 bytes, temperature: kUnknown
Nov 23 20:48:45 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930925675498, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 14652851, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14619139, "index_size": 21570, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 111438, "raw_average_key_size": 25, "raw_value_size": 14534352, "raw_average_value_size": 3292, "num_data_blocks": 926, "num_entries": 4415, "num_filter_entries": 4415, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763930925, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Nov 23 20:48:45 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 20:48:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:48:45.675715) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 14652851 bytes
Nov 23 20:48:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:48:45.678426) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 73.6 rd, 65.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 13.3 +0.0 blob) out(14.0 +0.0 blob), read-write-amplify(12.5) write-amplify(5.9) OK, records in: 4848, records dropped: 433 output_compression: NoCompression
Nov 23 20:48:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:48:45.678444) EVENT_LOG_v1 {"time_micros": 1763930925678435, "job": 10, "event": "compaction_finished", "compaction_time_micros": 222677, "compaction_time_cpu_micros": 38694, "output_level": 6, "num_output_files": 1, "total_output_size": 14652851, "num_input_records": 4848, "num_output_records": 4415, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 20:48:45 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 20:48:45 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930925678884, "job": 10, "event": "table_file_deletion", "file_number": 23}
Nov 23 20:48:45 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 20:48:45 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930925681098, "job": 10, "event": "table_file_deletion", "file_number": 21}
Nov 23 20:48:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:48:45.452765) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:48:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:48:45.681176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:48:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:48:45.681181) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:48:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:48:45.681182) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:48:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:48:45.681184) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:48:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:48:45.681185) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:48:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:45.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:45 compute-1 sudo[113024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfibzzdyvdjlustwimtrueuaakcershu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930925.3858464-57-199618520532931/AnsiballZ_systemd.py'
Nov 23 20:48:45 compute-1 sudo[113024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:45.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:46 compute-1 python3.9[113026]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 23 20:48:46 compute-1 sudo[113024]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:46 compute-1 sshd-session[112923]: Invalid user cat from 102.176.81.29 port 55236
Nov 23 20:48:46 compute-1 sshd-session[112923]: Received disconnect from 102.176.81.29 port 55236:11: Bye Bye [preauth]
Nov 23 20:48:46 compute-1 sshd-session[112923]: Disconnected from invalid user cat 102.176.81.29 port 55236 [preauth]
Nov 23 20:48:47 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:48:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:47 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 20:48:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:47 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:48:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:47.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:47 compute-1 ceph-mon[80135]: pgmap v213: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:48:47 compute-1 sudo[113179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zigegzxxqinsiuqptujyowqsxfgykbrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930927.5102847-81-18728278076966/AnsiballZ_systemd.py'
Nov 23 20:48:47 compute-1 sudo[113179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:47.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:48 compute-1 python3.9[113181]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 20:48:48 compute-1 sudo[113179]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:48:49 compute-1 sudo[113332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwoyhgocstjesvausbzibxemuuzjndpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930928.6277812-108-94421202654055/AnsiballZ_command.py'
Nov 23 20:48:49 compute-1 sudo[113332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:49 compute-1 python3.9[113334]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:48:49 compute-1 sudo[113332]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:49.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:49 compute-1 ceph-mon[80135]: pgmap v214: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:48:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:50.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:50 compute-1 sudo[113486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpuxrvhiwyobgeybnllrzcsjbhycxpog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930929.589134-132-125538704598138/AnsiballZ_stat.py'
Nov 23 20:48:50 compute-1 sudo[113486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:50 compute-1 python3.9[113488]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:48:50 compute-1 sudo[113486]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:51 compute-1 sudo[113638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzkgxzbneshnhjhzwibjbfhtetmiwnrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930930.5764904-159-249721114952978/AnsiballZ_file.py'
Nov 23 20:48:51 compute-1 sudo[113638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:51 compute-1 python3.9[113640]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:48:51 compute-1 sudo[113638]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:51 compute-1 sshd-session[112717]: Connection closed by 192.168.122.30 port 34244
Nov 23 20:48:51 compute-1 sshd-session[112714]: pam_unix(sshd:session): session closed for user zuul
Nov 23 20:48:51 compute-1 systemd[1]: session-45.scope: Deactivated successfully.
Nov 23 20:48:51 compute-1 systemd[1]: session-45.scope: Consumed 3.859s CPU time.
Nov 23 20:48:51 compute-1 systemd-logind[793]: Session 45 logged out. Waiting for processes to exit.
Nov 23 20:48:51 compute-1 systemd-logind[793]: Removed session 45.
Nov 23 20:48:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:51.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:51 compute-1 ceph-mon[80135]: pgmap v215: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 20:48:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:48:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:52.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:48:52 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:48:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 20:48:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 23 20:48:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 23 20:48:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 23 20:48:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 23 20:48:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 23 20:48:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 23 20:48:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 20:48:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 20:48:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 20:48:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 23 20:48:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 20:48:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 23 20:48:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 23 20:48:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 23 20:48:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 23 20:48:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 23 20:48:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 23 20:48:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 23 20:48:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 23 20:48:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 23 20:48:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 23 20:48:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 23 20:48:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 23 20:48:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 20:48:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 23 20:48:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 20:48:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:53.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:53 compute-1 ceph-mon[80135]: pgmap v216: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 20:48:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:54.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:55 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:55 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:55.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/204855 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 20:48:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:55 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:55 compute-1 ceph-mon[80135]: pgmap v217: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 20:48:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:56.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:56 compute-1 sshd-session[113684]: Accepted publickey for zuul from 192.168.122.30 port 60746 ssh2: ECDSA SHA256:7LF3rB/846W//CS4OIcVKlH1BXQGVCcZuH+b9rjPyTo
Nov 23 20:48:56 compute-1 systemd-logind[793]: New session 46 of user zuul.
Nov 23 20:48:56 compute-1 systemd[1]: Started Session 46 of User zuul.
Nov 23 20:48:56 compute-1 sshd-session[113684]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 20:48:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:57 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:57 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:48:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:57 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:57 compute-1 sudo[113788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:48:57 compute-1 sudo[113788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:48:57 compute-1 sudo[113788]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:57.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:57 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:57 compute-1 ceph-mon[80135]: pgmap v218: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 20:48:57 compute-1 python3.9[113863]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:48:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:58.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:58 compute-1 sudo[114017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixdlloirzgyplrlayapbkdcmfykoshoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930938.6601677-63-48961405574985/AnsiballZ_setup.py'
Nov 23 20:48:58 compute-1 sudo[114017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:48:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:59 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:59 compute-1 python3.9[114019]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 20:48:59 compute-1 sudo[114017]: pam_unix(sudo:session): session closed for user root
Nov 23 20:48:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:59 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:48:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:48:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:59.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:48:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:59 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:48:59 compute-1 ceph-mon[80135]: pgmap v219: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 20:48:59 compute-1 sudo[114102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uarzqxklzzqtljfzizwgaddklcajmjod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930938.6601677-63-48961405574985/AnsiballZ_dnf.py'
Nov 23 20:48:59 compute-1 sudo[114102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:00.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:00 compute-1 python3.9[114104]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 23 20:49:00 compute-1 ceph-mon[80135]: pgmap v220: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 20:49:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:01 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:01 compute-1 sudo[114102]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:01 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:01.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:01 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:02.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:02 compute-1 python3.9[114256]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:49:02 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:49:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:03 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:03 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:49:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:03.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:49:03 compute-1 ceph-mon[80135]: pgmap v221: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:49:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:49:03 compute-1 python3.9[114408]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 23 20:49:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:03 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:03 compute-1 sudo[114409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:49:03 compute-1 sudo[114409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:49:03 compute-1 sudo[114409]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:03 compute-1 sudo[114440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 20:49:03 compute-1 sudo[114440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:49:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:04.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:04 compute-1 sudo[114440]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:04 compute-1 python3.9[114640]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:49:04 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:49:04 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 20:49:04 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:49:04 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:49:04 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 20:49:04 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 20:49:04 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:49:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:05 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:05 compute-1 python3.9[114790]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:49:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:05 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:05.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:05 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:05 compute-1 ceph-mon[80135]: pgmap v222: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:49:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:06.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:06 compute-1 sshd-session[113687]: Connection closed by 192.168.122.30 port 60746
Nov 23 20:49:06 compute-1 sshd-session[113684]: pam_unix(sshd:session): session closed for user zuul
Nov 23 20:49:06 compute-1 systemd[1]: session-46.scope: Deactivated successfully.
Nov 23 20:49:06 compute-1 systemd[1]: session-46.scope: Consumed 5.617s CPU time.
Nov 23 20:49:06 compute-1 systemd-logind[793]: Session 46 logged out. Waiting for processes to exit.
Nov 23 20:49:06 compute-1 systemd-logind[793]: Removed session 46.
Nov 23 20:49:07 compute-1 sshd-session[114816]: Invalid user support from 34.91.0.68 port 54024
Nov 23 20:49:07 compute-1 sshd-session[114816]: Received disconnect from 34.91.0.68 port 54024:11: Bye Bye [preauth]
Nov 23 20:49:07 compute-1 sshd-session[114816]: Disconnected from invalid user support 34.91.0.68 port 54024 [preauth]
Nov 23 20:49:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/204907 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 20:49:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:07 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:07 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:49:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:07 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:07.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:07 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:07 compute-1 ceph-mon[80135]: pgmap v223: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:49:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:49:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:08.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:49:09 compute-1 sudo[114819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 20:49:09 compute-1 sudo[114819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:49:09 compute-1 sudo[114819]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:09 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:09 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:09.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:09 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:09 compute-1 ceph-mon[80135]: pgmap v224: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:49:09 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:49:09 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:49:09 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Nov 23 20:49:09 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:49:09.845378) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 20:49:09 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Nov 23 20:49:09 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930949845446, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 511, "num_deletes": 251, "total_data_size": 771857, "memory_usage": 783000, "flush_reason": "Manual Compaction"}
Nov 23 20:49:09 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Nov 23 20:49:09 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930949854419, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 509525, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12791, "largest_seqno": 13297, "table_properties": {"data_size": 506826, "index_size": 735, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6430, "raw_average_key_size": 18, "raw_value_size": 501364, "raw_average_value_size": 1440, "num_data_blocks": 32, "num_entries": 348, "num_filter_entries": 348, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930926, "oldest_key_time": 1763930926, "file_creation_time": 1763930949, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 23 20:49:09 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 9058 microseconds, and 1829 cpu microseconds.
Nov 23 20:49:09 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 20:49:09 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:49:09.854449) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 509525 bytes OK
Nov 23 20:49:09 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:49:09.854463) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Nov 23 20:49:09 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:49:09.855556) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Nov 23 20:49:09 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:49:09.855576) EVENT_LOG_v1 {"time_micros": 1763930949855571, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 20:49:09 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:49:09.855594) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 20:49:09 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 768833, prev total WAL file size 768833, number of live WAL files 2.
Nov 23 20:49:09 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 20:49:09 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:49:09.856155) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Nov 23 20:49:09 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 20:49:09 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(497KB)], [24(13MB)]
Nov 23 20:49:09 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930949856191, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 15162376, "oldest_snapshot_seqno": -1}
Nov 23 20:49:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:10.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:10 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 4248 keys, 13435255 bytes, temperature: kUnknown
Nov 23 20:49:10 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930950040699, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 13435255, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13404303, "index_size": 19258, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10629, "raw_key_size": 108882, "raw_average_key_size": 25, "raw_value_size": 13324036, "raw_average_value_size": 3136, "num_data_blocks": 815, "num_entries": 4248, "num_filter_entries": 4248, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763930949, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Nov 23 20:49:10 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 20:49:10 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:49:10.040954) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 13435255 bytes
Nov 23 20:49:10 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:49:10.042238) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 82.1 rd, 72.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 14.0 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(56.1) write-amplify(26.4) OK, records in: 4763, records dropped: 515 output_compression: NoCompression
Nov 23 20:49:10 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:49:10.042253) EVENT_LOG_v1 {"time_micros": 1763930950042246, "job": 12, "event": "compaction_finished", "compaction_time_micros": 184597, "compaction_time_cpu_micros": 27093, "output_level": 6, "num_output_files": 1, "total_output_size": 13435255, "num_input_records": 4763, "num_output_records": 4248, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 20:49:10 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 20:49:10 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930950042405, "job": 12, "event": "table_file_deletion", "file_number": 26}
Nov 23 20:49:10 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 20:49:10 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930950045096, "job": 12, "event": "table_file_deletion", "file_number": 24}
Nov 23 20:49:10 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:49:09.856073) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:49:10 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:49:10.045125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:49:10 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:49:10.045128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:49:10 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:49:10.045130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:49:10 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:49:10.045131) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:49:10 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:49:10.045133) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:49:11 compute-1 sshd-session[114845]: Accepted publickey for zuul from 192.168.122.30 port 45908 ssh2: ECDSA SHA256:7LF3rB/846W//CS4OIcVKlH1BXQGVCcZuH+b9rjPyTo
Nov 23 20:49:11 compute-1 systemd-logind[793]: New session 47 of user zuul.
Nov 23 20:49:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:11 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:11 compute-1 systemd[1]: Started Session 47 of User zuul.
Nov 23 20:49:11 compute-1 sshd-session[114845]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 20:49:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:11 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:49:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:11.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:49:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:11 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:11 compute-1 ceph-mon[80135]: pgmap v225: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:49:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:12.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:12 compute-1 python3.9[114999]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:49:12 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:49:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:13 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:13 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:49:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:13.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:49:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:13 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:13 compute-1 ceph-mon[80135]: pgmap v226: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:49:13 compute-1 sudo[115154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvczkwfxmxuojyqzycvsrybsplvbaydy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930953.475583-110-67380718019800/AnsiballZ_file.py'
Nov 23 20:49:13 compute-1 sudo[115154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:14.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:14 compute-1 python3.9[115156]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:49:14 compute-1 sudo[115154]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:14 compute-1 sudo[115306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eixtvzgagptpahchgqstfurbvjnetxzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930954.253747-110-6825055856980/AnsiballZ_file.py'
Nov 23 20:49:14 compute-1 sudo[115306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:14 compute-1 python3.9[115308]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:49:14 compute-1 sudo[115306]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:15 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:15 compute-1 sudo[115459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdcfvzzspgbyadkidsgvctlefkzfqqnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930955.1299405-155-157013120365244/AnsiballZ_stat.py'
Nov 23 20:49:15 compute-1 sudo[115459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:15 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:15 compute-1 python3.9[115461]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:49:15 compute-1 sudo[115459]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:49:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:15.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:49:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:15 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:15 : epoch 69237329 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 20:49:15 compute-1 ceph-mon[80135]: pgmap v227: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:49:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:16.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:16 compute-1 sudo[115582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bullkbhvlgbgtwqhsiqylisyjpausufj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930955.1299405-155-157013120365244/AnsiballZ_copy.py'
Nov 23 20:49:16 compute-1 sudo[115582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:16 compute-1 python3.9[115584]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930955.1299405-155-157013120365244/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=c6695f794168bb06a68458e4c4302f75682e8d66 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:49:16 compute-1 sudo[115582]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:16 compute-1 sudo[115734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acqpwtiqnetvbynxzbvjndtzcnoofuld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930956.5145817-155-107003026360121/AnsiballZ_stat.py'
Nov 23 20:49:16 compute-1 sudo[115734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:16 compute-1 python3.9[115736]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:49:16 compute-1 sudo[115734]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:17 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:17 compute-1 sudo[115857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwvtaxdhznpihpkxcdqmxscnzkgujncd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930956.5145817-155-107003026360121/AnsiballZ_copy.py'
Nov 23 20:49:17 compute-1 sudo[115857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:17 compute-1 python3.9[115859]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930956.5145817-155-107003026360121/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=837e8dcdbcb3ca01e6b5360b86e6942411e1cc1f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:49:17 compute-1 sudo[115857]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:17 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:49:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:17 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:17 compute-1 sudo[115937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:49:17 compute-1 sudo[115937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:49:17 compute-1 sudo[115937]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:17 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:49:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:17.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:49:17 compute-1 sudo[116035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwtorjobfyucbhysebmlxirjhysxfnwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930957.5980859-155-269360365312086/AnsiballZ_stat.py'
Nov 23 20:49:17 compute-1 sudo[116035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:17 compute-1 ceph-mon[80135]: pgmap v228: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:49:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:49:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:18.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:49:18 compute-1 python3.9[116037]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:49:18 compute-1 sudo[116035]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:18 compute-1 sudo[116158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfzcbxhvasnsmhmojjycmfrvwvpjfkge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930957.5980859-155-269360365312086/AnsiballZ_copy.py'
Nov 23 20:49:18 compute-1 sudo[116158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:18 compute-1 python3.9[116160]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930957.5980859-155-269360365312086/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=b07f2c98942f4a42e88a4fd6c2dfd6797a26d65b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:49:18 compute-1 sudo[116158]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:18 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:18 : epoch 69237329 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 20:49:18 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:18 : epoch 69237329 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:49:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:49:18 compute-1 ceph-mon[80135]: pgmap v229: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:49:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:19 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:19 compute-1 sshd-session[116261]: Invalid user solv from 161.35.133.66 port 57522
Nov 23 20:49:19 compute-1 sshd-session[116261]: Connection closed by invalid user solv 161.35.133.66 port 57522 [preauth]
Nov 23 20:49:19 compute-1 sudo[116312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoxaiqoyltvyyzmlitgemuqrpvciubqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930958.863547-282-29061407236457/AnsiballZ_file.py'
Nov 23 20:49:19 compute-1 sudo[116312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:19 compute-1 python3.9[116314]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:49:19 compute-1 sudo[116312]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:19 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:19 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:19.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:19 compute-1 sudo[116465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewtgdpawgyxwshrjixvbqstlbbdlpzfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930959.730244-282-217961494645066/AnsiballZ_file.py'
Nov 23 20:49:19 compute-1 sudo[116465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:20.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:20 compute-1 python3.9[116467]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:49:20 compute-1 sudo[116465]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:20 compute-1 sudo[116617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckubwtlctpdcfbogechgbbifudiqxule ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930960.4042263-323-251517065325017/AnsiballZ_stat.py'
Nov 23 20:49:20 compute-1 sudo[116617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:20 compute-1 python3.9[116619]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:49:20 compute-1 sudo[116617]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:21 compute-1 sudo[116740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wonfovcfbmvyvqmvdzscpozhvvlzyrgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930960.4042263-323-251517065325017/AnsiballZ_copy.py'
Nov 23 20:49:21 compute-1 sudo[116740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:21 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:21 compute-1 python3.9[116742]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930960.4042263-323-251517065325017/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=5573c0dbfa105202cd0bc263e2740c0ee40f10d4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:49:21 compute-1 sudo[116740]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:21 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:21 compute-1 ceph-mon[80135]: pgmap v230: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 20:49:21 compute-1 sudo[116893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtwbnrvbyonlmlhqaavekcroqdlzbizl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930961.5185506-323-85067431312601/AnsiballZ_stat.py'
Nov 23 20:49:21 compute-1 sudo[116893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:21 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:49:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:21.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:49:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:21 : epoch 69237329 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 20:49:21 compute-1 python3.9[116895]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:49:21 compute-1 sudo[116893]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:49:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:22.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:49:22 compute-1 sudo[117016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfazogphslbavpyvpzsqwshptifavqnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930961.5185506-323-85067431312601/AnsiballZ_copy.py'
Nov 23 20:49:22 compute-1 sudo[117016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:22 compute-1 python3.9[117018]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930961.5185506-323-85067431312601/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=26cfebde0335fa79ed2e9639d0ee86f73b64ddb4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:49:22 compute-1 sudo[117016]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:22 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:49:22 compute-1 sudo[117168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejjmnkqksmsuobpguqettzbqfwpifcag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930962.647661-323-50759445602601/AnsiballZ_stat.py'
Nov 23 20:49:22 compute-1 sudo[117168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:23 compute-1 python3.9[117170]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:49:23 compute-1 sudo[117168]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:23 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:23 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:23 compute-1 sudo[117292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmdiscxrvnhtyaqcggmzsboikrbizntp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930962.647661-323-50759445602601/AnsiballZ_copy.py'
Nov 23 20:49:23 compute-1 sudo[117292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:23 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:23 compute-1 ceph-mon[80135]: pgmap v231: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 20:49:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:23.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:23 compute-1 python3.9[117294]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930962.647661-323-50759445602601/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=da68e833cddbc2fb38a5a85f757ef73f04436e47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:49:23 compute-1 sudo[117292]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:24.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:24 compute-1 sudo[117444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttvtcvwnczivvfelkuednofekhkoncse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930964.158132-443-72201042513638/AnsiballZ_file.py'
Nov 23 20:49:24 compute-1 sudo[117444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:24 compute-1 python3.9[117446]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:49:24 compute-1 sudo[117444]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:25 compute-1 sudo[117596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aouzefqcubzybypnhzxicpssckfflsbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930964.838496-443-77369989778241/AnsiballZ_file.py'
Nov 23 20:49:25 compute-1 sudo[117596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:25 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24540016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:25 compute-1 python3.9[117598]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:49:25 compute-1 sudo[117596]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:25 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:25 compute-1 sudo[117750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgyqqxaozzjuapcamfhdiekalhyuntmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930965.4880726-484-54760953798451/AnsiballZ_stat.py'
Nov 23 20:49:25 compute-1 sudo[117750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:25 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:49:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:25.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:49:25 compute-1 ceph-mon[80135]: pgmap v232: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 20:49:25 compute-1 python3.9[117752]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:49:25 compute-1 sudo[117750]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:26.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:26 compute-1 sudo[117873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxzulhccblmgdjfidavgvmtsbpfhudbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930965.4880726-484-54760953798451/AnsiballZ_copy.py'
Nov 23 20:49:26 compute-1 sudo[117873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:26 compute-1 python3.9[117875]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930965.4880726-484-54760953798451/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=e255433b2b130bb49d47746bfb39bf4444637eba backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:49:26 compute-1 sudo[117873]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:26 compute-1 sudo[118025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktlroeaqbzaefgwpbibxunhmquspvppt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930966.7242484-484-40941470378252/AnsiballZ_stat.py'
Nov 23 20:49:26 compute-1 sudo[118025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:27 compute-1 python3.9[118027]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:49:27 compute-1 sudo[118025]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/204927 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 20:49:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:27 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:27 compute-1 sudo[118149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqeqkbhhiuqqpxeomqgwbbadzrwjfirl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930966.7242484-484-40941470378252/AnsiballZ_copy.py'
Nov 23 20:49:27 compute-1 sudo[118149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:27 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:49:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:27 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:27 compute-1 python3.9[118151]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930966.7242484-484-40941470378252/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=26cfebde0335fa79ed2e9639d0ee86f73b64ddb4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:49:27 compute-1 sudo[118149]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:27 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.002000051s ======
Nov 23 20:49:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:27.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Nov 23 20:49:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:28.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:28 compute-1 sudo[118303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apgrzkdisawpcjsmdgsdszoqpauogcqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930967.8974254-484-97027044703044/AnsiballZ_stat.py'
Nov 23 20:49:28 compute-1 sudo[118303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:28 compute-1 ceph-mon[80135]: pgmap v233: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 20:49:28 compute-1 python3.9[118305]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:49:28 compute-1 sudo[118303]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:28 compute-1 sudo[118426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-liungibhfgbfcdewxsnjhkzuxosqpcyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930967.8974254-484-97027044703044/AnsiballZ_copy.py'
Nov 23 20:49:28 compute-1 sudo[118426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:29 compute-1 python3.9[118428]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930967.8974254-484-97027044703044/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=dc8cfd8f437b6e825d312f4878d06173fdcec8c8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:49:29 compute-1 sudo[118426]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:29 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24480016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:29 compute-1 sshd-session[118199]: Invalid user kevin from 43.225.142.116 port 49088
Nov 23 20:49:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:29 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:29 compute-1 sshd-session[118199]: Received disconnect from 43.225.142.116 port 49088:11: Bye Bye [preauth]
Nov 23 20:49:29 compute-1 sshd-session[118199]: Disconnected from invalid user kevin 43.225.142.116 port 49088 [preauth]
Nov 23 20:49:29 compute-1 ceph-mon[80135]: pgmap v234: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 20:49:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:29 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:49:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:29.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:49:30 compute-1 sudo[118579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njpmoisvmdmdpvvfynzklkkasuwolhto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930969.774094-649-134126338099911/AnsiballZ_file.py'
Nov 23 20:49:30 compute-1 sudo[118579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:30.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:30 compute-1 python3.9[118581]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:49:30 compute-1 sudo[118579]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:30 compute-1 sudo[118731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzenhdixzmizaiwdlyzevxrfnvtvrmjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930970.4093323-672-257334711054345/AnsiballZ_stat.py'
Nov 23 20:49:30 compute-1 sudo[118731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:30 compute-1 python3.9[118733]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:49:30 compute-1 sudo[118731]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:31 compute-1 ceph-mon[80135]: pgmap v235: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 20:49:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:31 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:31 compute-1 sudo[118854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtszkghmcxtqbbkaoqqdilliwkukjiuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930970.4093323-672-257334711054345/AnsiballZ_copy.py'
Nov 23 20:49:31 compute-1 sudo[118854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:31 compute-1 python3.9[118856]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930970.4093323-672-257334711054345/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=848940549ac5db80ec615963c7c09743939a62fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:49:31 compute-1 sudo[118854]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:31 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24480016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:31 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:31.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:31 compute-1 sudo[119007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqqkrkdzmkkkvggyyyrnvocsmdfdrxdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930971.6403334-716-263338088893505/AnsiballZ_file.py'
Nov 23 20:49:31 compute-1 sudo[119007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:32.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:32 compute-1 python3.9[119009]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:49:32 compute-1 sudo[119007]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:32 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:49:32 compute-1 sudo[119159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eohdfbdfaascnodwewbvafqndsrhsmwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930972.3864365-748-51528642073617/AnsiballZ_stat.py'
Nov 23 20:49:32 compute-1 sudo[119159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:32 compute-1 python3.9[119161]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:49:32 compute-1 sudo[119159]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:33 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:33 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:33 compute-1 sudo[119282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alikmcvcepqungdwmvvkljigvvmfxblv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930972.3864365-748-51528642073617/AnsiballZ_copy.py'
Nov 23 20:49:33 compute-1 sudo[119282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:33 compute-1 python3.9[119284]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930972.3864365-748-51528642073617/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=848940549ac5db80ec615963c7c09743939a62fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:49:33 compute-1 sudo[119282]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:33 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:33 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:33 compute-1 ceph-mon[80135]: pgmap v236: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:49:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:49:33 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:33 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24480016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:49:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:33.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:49:33 compute-1 sudo[119435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jppgrmbqvecucdbdggslhvoteodhrlre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930973.7115235-795-256400319301911/AnsiballZ_file.py'
Nov 23 20:49:33 compute-1 sudo[119435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:34.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:34 compute-1 python3.9[119437]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:49:34 compute-1 sudo[119435]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:34 compute-1 sudo[119587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuofwnpuzxudyczwqknaheqpbkeunvng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930974.3081045-818-34005275327308/AnsiballZ_stat.py'
Nov 23 20:49:34 compute-1 sudo[119587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:34 compute-1 python3.9[119589]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:49:34 compute-1 sudo[119587]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:35 compute-1 sudo[119710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gycxrmhypwguinwdmlnhkjgruebhwxbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930974.3081045-818-34005275327308/AnsiballZ_copy.py'
Nov 23 20:49:35 compute-1 sudo[119710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:35 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:35 compute-1 python3.9[119712]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930974.3081045-818-34005275327308/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=848940549ac5db80ec615963c7c09743939a62fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:49:35 compute-1 sudo[119710]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:35 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:35 compute-1 ceph-mon[80135]: pgmap v237: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:49:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:35 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:49:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:35.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:49:35 compute-1 sudo[119863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfvaarxsjrfrkaguyjggkalzkkcicjsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930975.581422-862-158068833542531/AnsiballZ_file.py'
Nov 23 20:49:35 compute-1 sudo[119863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:36.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:36 compute-1 python3.9[119865]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:49:36 compute-1 sudo[119863]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:36 compute-1 sudo[120015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efnmoviouheqgxdhvmjfevwdkzlqqhpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930976.3147132-887-2070853817079/AnsiballZ_stat.py'
Nov 23 20:49:36 compute-1 sudo[120015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:36 compute-1 python3.9[120017]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:49:36 compute-1 sudo[120015]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:37 compute-1 sudo[120138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iutjiaovhbvpvtdrfyyipiqyapyimmvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930976.3147132-887-2070853817079/AnsiballZ_copy.py'
Nov 23 20:49:37 compute-1 sudo[120138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:37 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:37 compute-1 python3.9[120140]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930976.3147132-887-2070853817079/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=848940549ac5db80ec615963c7c09743939a62fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:49:37 compute-1 sudo[120138]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:37 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:49:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:37 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:37 compute-1 sudo[120266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:49:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:37 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:37 compute-1 sudo[120266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:49:37 compute-1 sudo[120266]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:37 compute-1 sudo[120314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxmoopheoxaachnhxaozezocrngtnrrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930977.5736618-932-233450892071096/AnsiballZ_file.py'
Nov 23 20:49:37 compute-1 sudo[120314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:37.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:37 compute-1 ceph-mon[80135]: pgmap v238: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 20:49:38 compute-1 python3.9[120318]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:49:38 compute-1 sudo[120314]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:38.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:38 compute-1 sudo[120468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtyxrmyxzropbwrmypcltyvfjzvztvls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930978.1828322-954-67957460539938/AnsiballZ_stat.py'
Nov 23 20:49:38 compute-1 sudo[120468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:38 compute-1 python3.9[120470]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:49:38 compute-1 sudo[120468]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:38 compute-1 ceph-mon[80135]: pgmap v239: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 20:49:39 compute-1 sudo[120591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjjorktggthbtidlbgrskqbkfckrkdsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930978.1828322-954-67957460539938/AnsiballZ_copy.py'
Nov 23 20:49:39 compute-1 sudo[120591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:39 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:39 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:39 compute-1 python3.9[120593]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930978.1828322-954-67957460539938/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=848940549ac5db80ec615963c7c09743939a62fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:49:39 compute-1 sudo[120591]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:39 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:39 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:39 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:39 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:39 compute-1 sudo[120744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igsstprwnpqjafeflcouzmhfziqrkrma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930979.5351863-998-87968501445075/AnsiballZ_file.py'
Nov 23 20:49:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:39 compute-1 sudo[120744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:49:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:39.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:49:40 compute-1 python3.9[120746]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:49:40 compute-1 sudo[120744]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:40.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:40 compute-1 sudo[120896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzqlqlrxlkuiknquubxyshbjyesouxli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930980.197119-1023-110698636821872/AnsiballZ_stat.py'
Nov 23 20:49:40 compute-1 sudo[120896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:40 compute-1 python3.9[120898]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:49:40 compute-1 sudo[120896]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:41 compute-1 sudo[121019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chnumpjzcrzpsbdojoxmcxyszvyyzqmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930980.197119-1023-110698636821872/AnsiballZ_copy.py'
Nov 23 20:49:41 compute-1 sudo[121019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:41 compute-1 python3.9[121021]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930980.197119-1023-110698636821872/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=848940549ac5db80ec615963c7c09743939a62fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:49:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:41 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24540032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:41 compute-1 sudo[121019]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:41 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:41 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:41.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:41 compute-1 ceph-mon[80135]: pgmap v240: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 23 20:49:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:42.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:42 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:49:42 compute-1 ceph-mon[80135]: pgmap v241: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:49:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:43 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:43 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24540032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:43 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:43.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:49:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:44.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:49:45 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:45 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:45 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:45 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:45 compute-1 ceph-mon[80135]: pgmap v242: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 20:49:45 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:45 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:49:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:45.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:49:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:46.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:47 compute-1 sshd-session[114848]: Connection closed by 192.168.122.30 port 45908
Nov 23 20:49:47 compute-1 sshd-session[114845]: pam_unix(sshd:session): session closed for user zuul
Nov 23 20:49:47 compute-1 systemd[1]: session-47.scope: Deactivated successfully.
Nov 23 20:49:47 compute-1 systemd[1]: session-47.scope: Consumed 21.852s CPU time.
Nov 23 20:49:47 compute-1 systemd-logind[793]: Session 47 logged out. Waiting for processes to exit.
Nov 23 20:49:47 compute-1 systemd-logind[793]: Removed session 47.
Nov 23 20:49:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:47 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:47 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:49:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:47 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:47 compute-1 ceph-mon[80135]: pgmap v243: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:49:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:47 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:47.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:48.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:49:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:49 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:49 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:49 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468002970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:49 compute-1 ceph-mon[80135]: pgmap v244: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:49:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:49.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:49:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:50.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:49:51 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/204951 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 20:49:51 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:51 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:51 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:51 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:51 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:51 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:49:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:51.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:49:51 compute-1 ceph-mon[80135]: pgmap v245: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:49:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:52.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:52 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:49:52 compute-1 sshd-session[121053]: Accepted publickey for zuul from 192.168.122.30 port 37598 ssh2: ECDSA SHA256:7LF3rB/846W//CS4OIcVKlH1BXQGVCcZuH+b9rjPyTo
Nov 23 20:49:52 compute-1 systemd-logind[793]: New session 48 of user zuul.
Nov 23 20:49:52 compute-1 systemd[1]: Started Session 48 of User zuul.
Nov 23 20:49:52 compute-1 sshd-session[121053]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 20:49:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468002970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:53 compute-1 sudo[121207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrxlrpcnumzkqyzdylgbdndhjqgdfzkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930993.0888863-27-170476267244356/AnsiballZ_file.py'
Nov 23 20:49:53 compute-1 sudo[121207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:53.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:53 compute-1 python3.9[121209]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:49:53 compute-1 sudo[121207]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:53 compute-1 ceph-mon[80135]: pgmap v246: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 20:49:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:54.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:54 compute-1 sudo[121359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vagcdtkburftilknyfsrjxnlbczksssh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930994.0915732-63-267165231247700/AnsiballZ_stat.py'
Nov 23 20:49:54 compute-1 sudo[121359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:54 compute-1 python3.9[121361]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:49:54 compute-1 sudo[121359]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:54 compute-1 ceph-mon[80135]: pgmap v247: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 20:49:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:55 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:55 compute-1 sudo[121482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pntymetwueodszohzdcdrtmcmhtocbwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930994.0915732-63-267165231247700/AnsiballZ_copy.py'
Nov 23 20:49:55 compute-1 sudo[121482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:55 compute-1 python3.9[121484]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930994.0915732-63-267165231247700/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=756e8313f47ae598921d0392828cdc60f53012e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:49:55 compute-1 sudo[121482]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:55 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468002970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:55 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:55.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:56 compute-1 sudo[121635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khbmpxmnfyezqqkrjesdrfvdbnfrlcxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930995.696197-63-154300165805421/AnsiballZ_stat.py'
Nov 23 20:49:56 compute-1 sudo[121635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:49:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:56.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:49:56 compute-1 python3.9[121637]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:49:56 compute-1 sudo[121635]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:56 compute-1 sudo[121760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnmobjyvlpoqbqhysksneehlvrzpnmle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763930995.696197-63-154300165805421/AnsiballZ_copy.py'
Nov 23 20:49:56 compute-1 sudo[121760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:49:56 compute-1 python3.9[121762]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930995.696197-63-154300165805421/.source.conf _original_basename=ceph.conf follow=False checksum=d92b20e9a86369ec384ba170ca716bfc5aeaba51 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:49:56 compute-1 sudo[121760]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:57 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:57 compute-1 sshd-session[121056]: Connection closed by 192.168.122.30 port 37598
Nov 23 20:49:57 compute-1 sshd-session[121053]: pam_unix(sshd:session): session closed for user zuul
Nov 23 20:49:57 compute-1 systemd[1]: session-48.scope: Deactivated successfully.
Nov 23 20:49:57 compute-1 systemd[1]: session-48.scope: Consumed 2.768s CPU time.
Nov 23 20:49:57 compute-1 systemd-logind[793]: Session 48 logged out. Waiting for processes to exit.
Nov 23 20:49:57 compute-1 systemd-logind[793]: Removed session 48.
Nov 23 20:49:57 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:49:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:57 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:57 compute-1 ceph-mon[80135]: pgmap v248: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 20:49:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:57 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:49:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:57.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:49:57 compute-1 sudo[121789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:49:57 compute-1 sudo[121789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:49:57 compute-1 sudo[121789]: pam_unix(sudo:session): session closed for user root
Nov 23 20:49:57 compute-1 sshd-session[121711]: Received disconnect from 118.145.189.160 port 53288:11: Bye Bye [preauth]
Nov 23 20:49:57 compute-1 sshd-session[121711]: Disconnected from authenticating user root 118.145.189.160 port 53288 [preauth]
Nov 23 20:49:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:49:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:58.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:49:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:59 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:59 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:59 compute-1 ceph-mon[80135]: pgmap v249: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 20:49:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:59 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:49:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:49:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:49:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:59.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:49:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:59 : epoch 69237329 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 20:50:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:50:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:00.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:50:00 compute-1 ceph-mon[80135]: overall HEALTH_OK
Nov 23 20:50:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:01 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:01 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003d20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:01 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:50:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:01.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:50:01 compute-1 ceph-mon[80135]: pgmap v250: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:50:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:02.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:02 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:50:02 compute-1 sshd-session[121816]: Accepted publickey for zuul from 192.168.122.30 port 56666 ssh2: ECDSA SHA256:7LF3rB/846W//CS4OIcVKlH1BXQGVCcZuH+b9rjPyTo
Nov 23 20:50:02 compute-1 systemd-logind[793]: New session 49 of user zuul.
Nov 23 20:50:02 compute-1 systemd[1]: Started Session 49 of User zuul.
Nov 23 20:50:02 compute-1 sshd-session[121816]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 20:50:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:02 : epoch 69237329 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 20:50:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:02 : epoch 69237329 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:50:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:02 : epoch 69237329 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:50:02 compute-1 ceph-mon[80135]: pgmap v251: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:50:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:03 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:03 compute-1 python3.9[121969]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:50:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:03 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:03 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003d40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:03.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:04.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:04 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:50:04 compute-1 sudo[122124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lphtqnnszpefstjrahqsomcynhjljzqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931004.246298-63-220851016633789/AnsiballZ_file.py'
Nov 23 20:50:04 compute-1 sudo[122124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:04 compute-1 python3.9[122126]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:50:04 compute-1 sudo[122124]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:05 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:05 compute-1 sudo[122276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-faitkgnfkxbumnqgxvamvvrbglrgxste ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931005.0203748-63-131213219956545/AnsiballZ_file.py'
Nov 23 20:50:05 compute-1 sudo[122276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:05 compute-1 ceph-mon[80135]: pgmap v252: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 20:50:05 compute-1 python3.9[122278]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:50:05 compute-1 sudo[122276]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:05 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:05 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:05.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:05 : epoch 69237329 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 20:50:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:06.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:06 compute-1 python3.9[122429]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:50:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:07 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:07 compute-1 sudo[122581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doqxwwpxcjtuzgtnnmeqdqldbezmucor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931006.8677354-132-241988006713151/AnsiballZ_seboolean.py'
Nov 23 20:50:07 compute-1 sudo[122581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:07 compute-1 sshd-session[122430]: Invalid user sysadmin from 102.176.81.29 port 57758
Nov 23 20:50:07 compute-1 python3.9[122583]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 23 20:50:07 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:50:07 compute-1 sshd-session[122430]: Received disconnect from 102.176.81.29 port 57758:11: Bye Bye [preauth]
Nov 23 20:50:07 compute-1 sshd-session[122430]: Disconnected from invalid user sysadmin 102.176.81.29 port 57758 [preauth]
Nov 23 20:50:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:07 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:07 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:07.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:08 compute-1 ceph-mon[80135]: pgmap v253: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 20:50:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:50:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:08.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:50:08 compute-1 sshd-session[122585]: Invalid user tony from 34.91.0.68 port 55992
Nov 23 20:50:08 compute-1 sshd-session[122585]: Received disconnect from 34.91.0.68 port 55992:11: Bye Bye [preauth]
Nov 23 20:50:08 compute-1 sshd-session[122585]: Disconnected from invalid user tony 34.91.0.68 port 55992 [preauth]
Nov 23 20:50:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:09 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480009770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:09 compute-1 ceph-mon[80135]: pgmap v254: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 20:50:09 compute-1 sudo[122592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:50:09 compute-1 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 23 20:50:09 compute-1 sudo[122592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:50:09 compute-1 sudo[122592]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:09 compute-1 sudo[122581]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:09 compute-1 sudo[122617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Nov 23 20:50:09 compute-1 sudo[122617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:50:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:09 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:09 compute-1 sudo[122617]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:09 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:09 compute-1 sudo[122723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:50:09 compute-1 sudo[122723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:50:09 compute-1 sudo[122723]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:09.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:09 compute-1 sudo[122764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 20:50:09 compute-1 sudo[122764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:50:10 compute-1 sudo[122862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnxqzqdtbzejefwzufudbmdlreemargf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931009.7884548-162-50933724404530/AnsiballZ_setup.py'
Nov 23 20:50:10 compute-1 sudo[122862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:10.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:10 compute-1 python3.9[122864]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 20:50:10 compute-1 sudo[122764]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:10 compute-1 sudo[122862]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:10 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:50:10 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:50:10 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:50:10 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:50:10 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:50:10 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 20:50:10 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:50:10 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:50:10 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 20:50:10 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 20:50:10 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:50:10 compute-1 sudo[122978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egxtdotlmlerxlokqyaerejgtshafath ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931009.7884548-162-50933724404530/AnsiballZ_dnf.py'
Nov 23 20:50:10 compute-1 sudo[122978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:11 compute-1 python3.9[122980]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 20:50:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:11 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:11 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480009770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:11 compute-1 ceph-mon[80135]: pgmap v255: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 20:50:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:11 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:50:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:11.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:50:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:12.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:12 compute-1 sudo[122978]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:12 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:50:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205013 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 20:50:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:13 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:13 compute-1 sudo[123132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjclyliioupncdiyfhdweiutwhwlalpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931012.7449772-198-176761341198699/AnsiballZ_systemd.py'
Nov 23 20:50:13 compute-1 sudo[123132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:13 compute-1 python3.9[123134]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 23 20:50:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:13 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:13 compute-1 sudo[123132]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:13 compute-1 ceph-mon[80135]: pgmap v256: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 20:50:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:13 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480009770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:50:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:13.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:50:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:14.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:14 compute-1 sudo[123288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfzrwpwbfepnplewhtnknwiniozrozlc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763931014.049906-222-84763545787968/AnsiballZ_edpm_nftables_snippet.py'
Nov 23 20:50:14 compute-1 sudo[123288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:14 compute-1 python3[123290]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Nov 23 20:50:14 compute-1 sudo[123288]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:15 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:15 compute-1 sudo[123440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkqnvetqamesrhtxljrzfcxhebuxssdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931015.0864983-249-112117873468341/AnsiballZ_file.py'
Nov 23 20:50:15 compute-1 sudo[123440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:15 compute-1 python3.9[123442]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:50:15 compute-1 sudo[123440]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:15 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:15 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:15.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:15 compute-1 ceph-mon[80135]: pgmap v257: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 20:50:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:16.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:16 compute-1 sudo[123595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogkzunawuokjmrysqrxrnrsihkllbsec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931015.949997-273-279090973998992/AnsiballZ_stat.py'
Nov 23 20:50:16 compute-1 sudo[123595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:16 compute-1 sudo[123598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 20:50:16 compute-1 sudo[123598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:50:16 compute-1 sudo[123598]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:16 compute-1 python3.9[123597]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:50:16 compute-1 sshd-session[123512]: Connection closed by authenticating user root 92.118.39.92 port 47784 [preauth]
Nov 23 20:50:16 compute-1 sudo[123595]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:16 compute-1 sudo[123698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsxisfbmeilkyphanenbbdzenwvtmjtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931015.949997-273-279090973998992/AnsiballZ_file.py'
Nov 23 20:50:16 compute-1 sudo[123698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:17 compute-1 python3.9[123700]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:50:17 compute-1 sudo[123698]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:17 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480009770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:17 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:50:17 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:50:17 compute-1 ceph-mon[80135]: pgmap v258: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:50:17 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:50:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:17 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:17 compute-1 sudo[123851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odpguvnubbnqimbnxkaxfuyxkjskmqgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931017.4334748-309-177715350180677/AnsiballZ_stat.py'
Nov 23 20:50:17 compute-1 sudo[123851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:17 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:17.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:17 compute-1 python3.9[123853]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:50:17 compute-1 sudo[123851]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:18 compute-1 sudo[123856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:50:18 compute-1 sudo[123856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:50:18 compute-1 sudo[123856]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:50:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:18.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:50:18 compute-1 sudo[123954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epgxpsybnsobxdlnlnxlnitphobicogi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931017.4334748-309-177715350180677/AnsiballZ_file.py'
Nov 23 20:50:18 compute-1 sudo[123954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:50:18 compute-1 python3.9[123956]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.x68dw7a8 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:50:18 compute-1 sudo[123954]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:19 compute-1 sudo[124106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qinsvyftqurlipnxsqrbshiflwyyziag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931018.787659-345-166598608835870/AnsiballZ_stat.py'
Nov 23 20:50:19 compute-1 sudo[124106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:19 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:19 compute-1 python3.9[124108]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:50:19 compute-1 sudo[124106]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:19 compute-1 ceph-mon[80135]: pgmap v259: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:50:19 compute-1 sudo[124185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uykgknyvxugxhqjswxcgjzcmfgjtguzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931018.787659-345-166598608835870/AnsiballZ_file.py'
Nov 23 20:50:19 compute-1 sudo[124185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:19 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480009770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:19 compute-1 python3.9[124187]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:50:19 compute-1 sudo[124185]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:19 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:19.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:20.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:20 compute-1 sudo[124337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koebfkqwfkxrzfvidaqoyyzxccldzcec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931020.098787-384-128799917312973/AnsiballZ_command.py'
Nov 23 20:50:20 compute-1 sudo[124337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:20 compute-1 python3.9[124339]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:50:20 compute-1 sudo[124337]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:21 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:21 compute-1 sudo[124491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyxztzjjkbxwfydytlmeblgxhnshtzjd ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763931021.0482557-408-126460579122553/AnsiballZ_edpm_nftables_from_files.py'
Nov 23 20:50:21 compute-1 sudo[124491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:21 compute-1 python3[124494]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 23 20:50:21 compute-1 sudo[124491]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:21 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:21 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:21.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:21 compute-1 ceph-mon[80135]: pgmap v260: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:50:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:50:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:22.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:50:22 compute-1 sudo[124644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtyiolgmhaipqfhtylgvrwirxhgsuhyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931021.9635844-432-12731109785803/AnsiballZ_stat.py'
Nov 23 20:50:22 compute-1 sudo[124644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:22 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:50:22 compute-1 python3.9[124646]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:50:22 compute-1 sudo[124644]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:23 compute-1 ceph-mon[80135]: pgmap v261: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 20:50:23 compute-1 sudo[124769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vltqwdpqjzpxswjyqhlmjatedpjuzodv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931021.9635844-432-12731109785803/AnsiballZ_copy.py'
Nov 23 20:50:23 compute-1 sudo[124769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:23 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:23 compute-1 python3.9[124771]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931021.9635844-432-12731109785803/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:50:23 compute-1 sudo[124769]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:23 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:23 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:50:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:23.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:50:23 compute-1 sudo[124922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sshhewqtrcrdgoutgshgnzyjnjughdlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931023.5975683-477-266356286050086/AnsiballZ_stat.py'
Nov 23 20:50:23 compute-1 sudo[124922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:24 compute-1 python3.9[124924]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:50:24 compute-1 sudo[124922]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:24.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:24 compute-1 sudo[125047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afljmhugdhhplpxjfdkatwmznhjfcspm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931023.5975683-477-266356286050086/AnsiballZ_copy.py'
Nov 23 20:50:24 compute-1 sudo[125047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:24 compute-1 python3.9[125049]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931023.5975683-477-266356286050086/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:50:24 compute-1 sudo[125047]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:25 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:25 compute-1 sudo[125200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvarajfpmdaqjbwhekihuvcopryzvtaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931025.238438-522-198293900914498/AnsiballZ_stat.py'
Nov 23 20:50:25 compute-1 sudo[125200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:25 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:25 compute-1 python3.9[125202]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:50:25 compute-1 sudo[125200]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:25 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:25 compute-1 ceph-mon[80135]: pgmap v262: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 23 20:50:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:25.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:26 compute-1 sudo[125325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuehqvjbsuulxgomtatcvnkdnofaubjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931025.238438-522-198293900914498/AnsiballZ_copy.py'
Nov 23 20:50:26 compute-1 sudo[125325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:26.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:26 compute-1 python3.9[125327]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931025.238438-522-198293900914498/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:50:26 compute-1 sudo[125325]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:27 compute-1 sudo[125477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdsbqhadrgfjamcvpjvsfaprlgornysx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931026.8137777-567-189985884401335/AnsiballZ_stat.py'
Nov 23 20:50:27 compute-1 sudo[125477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:27 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:27 compute-1 python3.9[125479]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:50:27 compute-1 sudo[125477]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:27 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:50:27 compute-1 sudo[125603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxirzdinkcnxowkqqxgbkjsepohvcidb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931026.8137777-567-189985884401335/AnsiballZ_copy.py'
Nov 23 20:50:27 compute-1 sudo[125603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:27 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:27 compute-1 python3.9[125605]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931026.8137777-567-189985884401335/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:50:27 compute-1 sudo[125603]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:27 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:50:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:27.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:50:27 compute-1 ceph-mon[80135]: pgmap v263: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:50:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:50:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:28.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:50:28 compute-1 sudo[125755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekryzknasyrbvnthkeryymfwduleweoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931028.3467958-612-47770686636605/AnsiballZ_stat.py'
Nov 23 20:50:28 compute-1 sudo[125755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:28 compute-1 python3.9[125757]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:50:29 compute-1 sudo[125755]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:29 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:29 compute-1 sudo[125881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxehbbuybmaprydgmgeoucqqqwogxxse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931028.3467958-612-47770686636605/AnsiballZ_copy.py'
Nov 23 20:50:29 compute-1 sudo[125881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:29 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:29 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24680014b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:29 compute-1 ceph-mon[80135]: pgmap v264: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:50:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:29.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:29 compute-1 python3.9[125883]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931028.3467958-612-47770686636605/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:50:29 compute-1 sudo[125881]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:30.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:30 compute-1 sudo[126035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykmdnqikkjphzwzrxudoqkzykyyxxuvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931030.1886668-657-135662157109128/AnsiballZ_file.py'
Nov 23 20:50:30 compute-1 sudo[126035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:30 compute-1 python3.9[126037]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:50:30 compute-1 sudo[126035]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:31 compute-1 ceph-mon[80135]: pgmap v265: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 20:50:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:31 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:31 compute-1 sudo[126187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqksibniewwgbkddczehuscqyupgfrfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931031.0231318-681-176731711329238/AnsiballZ_command.py'
Nov 23 20:50:31 compute-1 sudo[126187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:31 compute-1 python3.9[126189]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:50:31 compute-1 sudo[126187]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:31 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:31 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:50:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:31.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:50:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:32.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:32 compute-1 sudo[126343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soaaxjlmbqbptnzbsaleclthyepfbvct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931031.8506875-705-66972283836350/AnsiballZ_blockinfile.py'
Nov 23 20:50:32 compute-1 sudo[126343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:32 compute-1 python3.9[126345]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:50:32 compute-1 sudo[126343]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:32 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:50:33 compute-1 sudo[126495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imarzjtcsialygcjnylainulxgurcezl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931032.8653686-732-264241993902822/AnsiballZ_command.py'
Nov 23 20:50:33 compute-1 sudo[126495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:33 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:33 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24680014b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:33 compute-1 python3.9[126497]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:50:33 compute-1 sudo[126495]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:33 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:33 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:33 compute-1 ceph-mon[80135]: pgmap v266: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:50:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:50:33 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:33 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:50:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:33.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:50:33 compute-1 sudo[126649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chfkmabbfuwgmkntmrjheqkfxuwhcrnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931033.7507906-756-243258316028378/AnsiballZ_stat.py'
Nov 23 20:50:33 compute-1 sudo[126649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:34.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:34 compute-1 python3.9[126651]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:50:34 compute-1 sudo[126649]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:34 compute-1 sudo[126803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjvezqrolosjoaymzbmrtkfbpkehueuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931034.5863245-780-155305320736971/AnsiballZ_command.py'
Nov 23 20:50:34 compute-1 sudo[126803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:35 compute-1 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 20:50:35 compute-1 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 8328 writes, 34K keys, 8328 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 8328 writes, 1694 syncs, 4.92 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8328 writes, 34K keys, 8328 commit groups, 1.0 writes per commit group, ingest: 21.45 MB, 0.04 MB/s
                                           Interval WAL: 8328 writes, 1694 syncs, 4.92 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5580590769b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5580590769b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5580590769b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 23 20:50:35 compute-1 python3.9[126805]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:50:35 compute-1 sudo[126803]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:35 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:35 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24680014b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:35 compute-1 sudo[126959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpqvhdpktwjzcnrrxqmmjgtmywaqmghc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931035.5012665-804-274719949862305/AnsiballZ_file.py'
Nov 23 20:50:35 compute-1 sudo[126959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:35 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:35.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:36 compute-1 ceph-mon[80135]: pgmap v267: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 20:50:36 compute-1 python3.9[126961]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:50:36 compute-1 sudo[126959]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:50:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:36.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:50:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:37 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:37 compute-1 python3.9[127111]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:50:37 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:50:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:37 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:37 compute-1 ceph-mon[80135]: pgmap v268: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:50:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:37 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24680014b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:38.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:50:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:38.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:50:38 compute-1 sudo[127213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:50:38 compute-1 sudo[127213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:50:38 compute-1 sudo[127213]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:38 compute-1 sudo[127288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-saeoxkxaihqtmqwfjfpiykfqogjgowzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931038.3235478-924-154147093522425/AnsiballZ_command.py'
Nov 23 20:50:38 compute-1 sudo[127288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:39 compute-1 python3.9[127290]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:93:45:69:49" external_ids:ovn-encap-ip=172.19.0.102 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:50:39 compute-1 ovs-vsctl[127292]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:93:45:69:49 external_ids:ovn-encap-ip=172.19.0.102 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 23 20:50:39 compute-1 sudo[127288]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:39 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:39 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:39 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:39 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:39 compute-1 sudo[127443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqgrdalxhnnlvvcksknpertfyyneutkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931039.4247725-951-271651045312919/AnsiballZ_command.py'
Nov 23 20:50:39 compute-1 sudo[127443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:39 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:39 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:39 compute-1 python3.9[127445]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:50:39 compute-1 sudo[127443]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:50:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:40.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:50:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:40.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:40 compute-1 ceph-mon[80135]: pgmap v269: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:50:40 compute-1 sudo[127598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsrdrsbjnxxgsopwpzfofqfqftfbsuxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931040.4761806-975-243424959310020/AnsiballZ_command.py'
Nov 23 20:50:40 compute-1 sudo[127598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:40 compute-1 python3.9[127600]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:50:40 compute-1 ovs-vsctl[127601]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 23 20:50:41 compute-1 sudo[127598]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:41 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24680014b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:41 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:41 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:41 compute-1 python3.9[127752]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:50:41 compute-1 ceph-mon[80135]: pgmap v270: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 20:50:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 20:50:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:42.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 20:50:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:42.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:42 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:50:42 compute-1 sudo[127904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsuuacneudebkqtolovbyxkcecrlaroq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931042.350248-1026-178795575734489/AnsiballZ_file.py'
Nov 23 20:50:42 compute-1 sudo[127904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:42 compute-1 python3.9[127906]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:50:42 compute-1 sudo[127904]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:43 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:43 compute-1 sudo[128057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyodeddoqrrkbdtjemhxvwakauifritb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931043.2515514-1050-5425275750822/AnsiballZ_stat.py'
Nov 23 20:50:43 compute-1 sudo[128057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:43 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24680038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:43 compute-1 python3.9[128059]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:50:43 compute-1 sudo[128057]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:43 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:44 compute-1 sudo[128135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umxgzsjnvlwuzcqayaayqbxvzkeiizoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931043.2515514-1050-5425275750822/AnsiballZ_file.py'
Nov 23 20:50:44 compute-1 sudo[128135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:44.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:44 compute-1 python3.9[128137]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:50:44 compute-1 sudo[128135]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:44 compute-1 ceph-mon[80135]: pgmap v271: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:50:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:44.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:44 compute-1 sudo[128287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khjbvlhkpvntqaqstdfnenmbsdzlenhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931044.3978043-1050-137685720985197/AnsiballZ_stat.py'
Nov 23 20:50:44 compute-1 sudo[128287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:44 compute-1 python3.9[128289]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:50:44 compute-1 sudo[128287]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:45 compute-1 sudo[128365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szeqhljalkrwalceufytvbgcndwqftjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931044.3978043-1050-137685720985197/AnsiballZ_file.py'
Nov 23 20:50:45 compute-1 sudo[128365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:45 compute-1 python3.9[128367]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:50:45 compute-1 sudo[128365]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:45 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:45 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:45 compute-1 ceph-mon[80135]: pgmap v272: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 20:50:45 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:45 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:45 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:45 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24680038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:46.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:46 compute-1 sudo[128518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqgggkjvluhfjkwwfocdfsyfbtwumwtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931045.9715338-1120-17303506592003/AnsiballZ_file.py'
Nov 23 20:50:46 compute-1 sudo[128518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:46.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:46 compute-1 python3.9[128520]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:50:46 compute-1 sudo[128518]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:47 compute-1 sudo[128670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuyzmwlwdqicdnlsyzkrirymxbljalnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931046.8165002-1143-15215518117463/AnsiballZ_stat.py'
Nov 23 20:50:47 compute-1 sudo[128670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:47 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:47 compute-1 python3.9[128672]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:50:47 compute-1 sudo[128670]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:47 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:50:47 compute-1 sudo[128749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxoaobfmbxwtmydwwewsqfmjfonykyrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931046.8165002-1143-15215518117463/AnsiballZ_file.py'
Nov 23 20:50:47 compute-1 sudo[128749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:47 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:47 compute-1 python3.9[128751]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:50:47 compute-1 sudo[128749]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:47 compute-1 ceph-mon[80135]: pgmap v273: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:50:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:47 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:48.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:48.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:48 compute-1 sudo[128901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmymohkabgbsidadawaxeudqzyguaffq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931048.217903-1179-228957317284607/AnsiballZ_stat.py'
Nov 23 20:50:48 compute-1 sudo[128901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:48 compute-1 python3.9[128903]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:50:48 compute-1 sudo[128901]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:48 compute-1 sudo[128979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfydogowyszcswskctpelnzpurrxsxmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931048.217903-1179-228957317284607/AnsiballZ_file.py'
Nov 23 20:50:48 compute-1 sudo[128979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:49 compute-1 python3.9[128981]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:50:49 compute-1 sudo[128979]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:49 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:50:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:49 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24680038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:49 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:49 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:49 compute-1 sudo[129132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyxbodwodueaqwvqyzzzppbfgnreqmtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931049.663501-1215-85167610844982/AnsiballZ_systemd.py'
Nov 23 20:50:49 compute-1 sudo[129132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:50:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:50.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:50:50 compute-1 ceph-mon[80135]: pgmap v274: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:50:50 compute-1 python3.9[129134]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 20:50:50 compute-1 systemd[1]: Reloading.
Nov 23 20:50:50 compute-1 systemd-rc-local-generator[129164]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:50:50 compute-1 systemd-sysv-generator[129167]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:50:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:50.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:50 compute-1 sudo[129132]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:51 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:51 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:51 compute-1 sudo[129322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqmawkihuqqgsmoormeggalmwhqjzwwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931051.003327-1239-171188136306338/AnsiballZ_stat.py'
Nov 23 20:50:51 compute-1 sudo[129322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:51 compute-1 python3.9[129324]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:50:51 compute-1 sudo[129322]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:51 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:51 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24680038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:51 compute-1 sudo[129401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lymffurhumtcptknsbjeavtdiusbdoqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931051.003327-1239-171188136306338/AnsiballZ_file.py'
Nov 23 20:50:51 compute-1 sudo[129401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:51 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:51 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:52 compute-1 python3.9[129403]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:50:52 compute-1 sudo[129401]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:52.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:52 compute-1 ceph-mon[80135]: pgmap v275: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 20:50:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:50:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:52.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:50:52 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:50:52 compute-1 sudo[129553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oecxrzxwznqkfovjlgalcrjzfbkljesb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931052.4387565-1275-254578991080096/AnsiballZ_stat.py'
Nov 23 20:50:52 compute-1 sudo[129553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:52 compute-1 python3.9[129555]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:50:52 compute-1 sudo[129553]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:53 compute-1 sudo[129631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aewaotwkjktithxdbvddouyezufcjebp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931052.4387565-1275-254578991080096/AnsiballZ_file.py'
Nov 23 20:50:53 compute-1 sudo[129631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:53 compute-1 python3.9[129633]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:50:53 compute-1 sudo[129631]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:53 compute-1 ceph-mon[80135]: pgmap v276: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:50:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:54 compute-1 sudo[129785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdabzhckviobdzemjunlnkwxzyicgptj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931053.8440301-1311-264942652870599/AnsiballZ_systemd.py'
Nov 23 20:50:54 compute-1 sudo[129785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:50:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:54.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:50:54 compute-1 python3.9[129787]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 20:50:54 compute-1 systemd[1]: Reloading.
Nov 23 20:50:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:54.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:54 compute-1 systemd-rc-local-generator[129813]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:50:54 compute-1 systemd-sysv-generator[129816]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:50:54 compute-1 systemd[1]: Starting Create netns directory...
Nov 23 20:50:54 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 23 20:50:54 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 23 20:50:54 compute-1 systemd[1]: Finished Create netns directory.
Nov 23 20:50:54 compute-1 sudo[129785]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:55 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:55 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:55 compute-1 sudo[129982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkhshorpcybudhcqcdzbiwcvguvmkqlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931055.4763296-1341-266341816420838/AnsiballZ_file.py'
Nov 23 20:50:55 compute-1 sudo[129982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:55 compute-1 ceph-mon[80135]: pgmap v277: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 20:50:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:55 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:55 compute-1 python3.9[129984]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:50:55 compute-1 sudo[129982]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:56.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:56.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:56 compute-1 sudo[130134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vplnkghnfatndxiovzqvmkdngynavryp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931056.2854304-1365-222984569254372/AnsiballZ_stat.py'
Nov 23 20:50:56 compute-1 sudo[130134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:56 compute-1 python3.9[130136]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:50:56 compute-1 sudo[130134]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:57 compute-1 sudo[130257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvatmpgnbnykfywjtjnrkxmhwyldernz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931056.2854304-1365-222984569254372/AnsiballZ_copy.py'
Nov 23 20:50:57 compute-1 sudo[130257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:57 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:57 compute-1 python3.9[130259]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931056.2854304-1365-222984569254372/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:50:57 compute-1 sudo[130257]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:57 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:50:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:57 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:57 compute-1 ceph-mon[80135]: pgmap v278: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:50:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:57 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:58.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:58 compute-1 sudo[130410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmcsfizeuxhbdkhvzmegkojgefdaipzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931058.061927-1416-261955342179436/AnsiballZ_file.py'
Nov 23 20:50:58 compute-1 sudo[130410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:50:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:50:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:58.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:50:58 compute-1 python3.9[130412]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:50:58 compute-1 sudo[130410]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:58 compute-1 sudo[130413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:50:58 compute-1 sudo[130413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:50:58 compute-1 sudo[130413]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:59 compute-1 sudo[130587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koylaibxrdbwgpldbjcemjbjzizgyfqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931058.85419-1440-142060403416495/AnsiballZ_stat.py'
Nov 23 20:50:59 compute-1 sudo[130587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:59 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:59 compute-1 python3.9[130589]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:50:59 compute-1 sudo[130587]: pam_unix(sudo:session): session closed for user root
Nov 23 20:50:59 compute-1 sudo[130711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkwvnfksbgideawhhcelsczxwadulmfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931058.85419-1440-142060403416495/AnsiballZ_copy.py'
Nov 23 20:50:59 compute-1 sudo[130711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:50:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:59 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:59 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:50:59 compute-1 python3.9[130713]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763931058.85419-1440-142060403416495/.source.json _original_basename=.dtpuaz6g follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:50:59 compute-1 sudo[130711]: pam_unix(sudo:session): session closed for user root
Nov 23 20:51:00 compute-1 ceph-mon[80135]: pgmap v279: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:51:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:00.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:00.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:00 compute-1 sudo[130864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwztzhqpymmxsnepekvrdbzwxzymovtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931060.422097-1485-141679220514268/AnsiballZ_file.py'
Nov 23 20:51:00 compute-1 sudo[130864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:51:00 compute-1 python3.9[130866]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:51:00 compute-1 sudo[130864]: pam_unix(sudo:session): session closed for user root
Nov 23 20:51:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:01 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480002100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:01 compute-1 sudo[131017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvgukbuqpmsrwowumxwczmgywazalqqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931061.233101-1509-59259713736269/AnsiballZ_stat.py'
Nov 23 20:51:01 compute-1 sudo[131017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:51:01 compute-1 sudo[131017]: pam_unix(sudo:session): session closed for user root
Nov 23 20:51:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:01 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740043d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:01 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:02 compute-1 sudo[131140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cavfjrpzkwyjeiggsisgencnuontjbfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931061.233101-1509-59259713736269/AnsiballZ_copy.py'
Nov 23 20:51:02 compute-1 sudo[131140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:51:02 compute-1 ceph-mon[80135]: pgmap v280: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 20:51:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:51:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:02.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:51:02 compute-1 sudo[131140]: pam_unix(sudo:session): session closed for user root
Nov 23 20:51:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:02.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:02 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:51:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:51:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:03 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:03 compute-1 sudo[131292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpkasjgtbyodvxhjimuzlmcgswxmdpda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931062.9353817-1560-211402158511506/AnsiballZ_container_config_data.py'
Nov 23 20:51:03 compute-1 sudo[131292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:51:03 compute-1 python3.9[131294]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 23 20:51:03 compute-1 sudo[131292]: pam_unix(sudo:session): session closed for user root
Nov 23 20:51:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:03 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:03 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740043f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:04.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:04 compute-1 ceph-mon[80135]: pgmap v281: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:51:04 compute-1 sudo[131445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ounvoxlkbyogiioooeojeakwkuvdulxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931063.9637227-1587-194685470879308/AnsiballZ_container_config_hash.py'
Nov 23 20:51:04 compute-1 sudo[131445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:51:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:04.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:04 compute-1 python3.9[131447]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 23 20:51:04 compute-1 sudo[131445]: pam_unix(sudo:session): session closed for user root
Nov 23 20:51:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:05 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:05 compute-1 sudo[131598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wydgljqrpvogrisbnlytdloqpwsnntvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931065.034128-1614-69112705565123/AnsiballZ_podman_container_info.py'
Nov 23 20:51:05 compute-1 sudo[131598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:51:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:05 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480002100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:05 compute-1 python3.9[131600]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 23 20:51:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:05 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:06 compute-1 sudo[131598]: pam_unix(sudo:session): session closed for user root
Nov 23 20:51:06 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 20:51:06 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2470 writes, 14K keys, 2470 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.06 MB/s
                                           Cumulative WAL: 2470 writes, 2470 syncs, 1.00 writes per sync, written: 0.04 GB, 0.06 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2470 writes, 14K keys, 2470 commit groups, 1.0 writes per commit group, ingest: 38.81 MB, 0.06 MB/s
                                           Interval WAL: 2470 writes, 2470 syncs, 1.00 writes per sync, written: 0.04 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     50.5      0.43              0.05         6    0.071       0      0       0.0       0.0
                                             L6      1/0   12.81 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   2.9     80.0     70.3      0.90              0.16         5    0.180     21K   2261       0.0       0.0
                                            Sum      1/0   12.81 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.9     54.3     64.0      1.33              0.21        11    0.120     21K   2261       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.9     54.4     64.1      1.32              0.21        10    0.132     21K   2261       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   0.0     80.0     70.3      0.90              0.16         5    0.180     21K   2261       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     50.7      0.42              0.05         5    0.085       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.021, interval 0.021
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.08 GB write, 0.14 MB/s write, 0.07 GB read, 0.12 MB/s read, 1.3 seconds
                                           Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.07 GB read, 0.12 MB/s read, 1.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560649e57350#2 capacity: 304.00 MB usage: 2.53 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 8.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(170,2.33 MB,0.765188%) FilterBlock(11,69.42 KB,0.0223009%) IndexBlock(11,138.52 KB,0.0444964%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 23 20:51:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:51:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:06.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:51:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:51:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:06.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:51:07 compute-1 ceph-mon[80135]: pgmap v282: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 20:51:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:07 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004410 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:07 compute-1 sudo[131778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekkhgmhalbcdozsmbvbwwcvyrufqbxaq ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763931067.0183415-1653-227392667596527/AnsiballZ_edpm_container_manage.py'
Nov 23 20:51:07 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:51:07 compute-1 sudo[131778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:51:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:07 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:07 compute-1 python3[131780]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 23 20:51:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:07 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480002100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:08.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:08 compute-1 ceph-mon[80135]: pgmap v283: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:51:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:08.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:09 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004020 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:09 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:09 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:51:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:10.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:51:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:51:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:10.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:51:11 compute-1 ceph-mon[80135]: pgmap v284: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:51:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:11 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480002100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:11 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:11 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004450 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:12 compute-1 ceph-mon[80135]: pgmap v285: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 20:51:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:12.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:12.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:12 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:51:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:13 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:13 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480002100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:13 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:14.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:14 compute-1 ceph-mon[80135]: pgmap v286: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:51:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:51:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:14.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:51:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:15 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:15 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:15 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480002100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:16.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:51:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:16.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:51:16 compute-1 sudo[131896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:51:16 compute-1 sudo[131896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:51:16 compute-1 sudo[131896]: pam_unix(sudo:session): session closed for user root
Nov 23 20:51:16 compute-1 sudo[131921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 20:51:16 compute-1 sudo[131921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:51:16 compute-1 sshd-session[131894]: Invalid user jose from 118.145.189.160 port 39768
Nov 23 20:51:17 compute-1 sshd-session[131894]: Received disconnect from 118.145.189.160 port 39768:11: Bye Bye [preauth]
Nov 23 20:51:17 compute-1 sshd-session[131894]: Disconnected from invalid user jose 118.145.189.160 port 39768 [preauth]
Nov 23 20:51:17 compute-1 sudo[131921]: pam_unix(sudo:session): session closed for user root
Nov 23 20:51:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:17 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:17 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004490 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:17 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:51:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:18.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:51:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:51:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:18.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:51:18 compute-1 sudo[131978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:51:18 compute-1 sudo[131978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:51:18 compute-1 sudo[131978]: pam_unix(sudo:session): session closed for user root
Nov 23 20:51:19 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:51:19 compute-1 ceph-mon[80135]: pgmap v287: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 20:51:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:19 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480002100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:19 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24540040a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:19 compute-1 podman[131793]: 2025-11-23 20:51:19.888918858 +0000 UTC m=+11.967020025 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
Nov 23 20:51:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:19 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740044b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:20 compute-1 podman[132026]: 2025-11-23 20:51:20.000535723 +0000 UTC m=+0.020889910 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
Nov 23 20:51:20 compute-1 podman[132026]: 2025-11-23 20:51:20.120469888 +0000 UTC m=+0.140824065 container create 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 20:51:20 compute-1 python3[131780]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
Nov 23 20:51:20 compute-1 ceph-mon[80135]: pgmap v288: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:51:20 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:51:20 compute-1 ceph-mon[80135]: pgmap v289: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:51:20 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:51:20 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:51:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:51:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:20.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:51:20 compute-1 sudo[131778]: pam_unix(sudo:session): session closed for user root
Nov 23 20:51:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:20.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:20 compute-1 sudo[132214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szvqgzdehuufnabivuqyuhkuefwzglnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931080.5581393-1677-169786614138564/AnsiballZ_stat.py'
Nov 23 20:51:20 compute-1 sudo[132214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:51:21 compute-1 python3.9[132216]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:51:21 compute-1 sudo[132214]: pam_unix(sudo:session): session closed for user root
Nov 23 20:51:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:21 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:21 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800022a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:21 compute-1 ceph-mon[80135]: pgmap v290: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 20:51:21 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:51:21 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 20:51:21 compute-1 sudo[132369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpnnugwffusuwtqoatpeoofjshzwxfsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931081.5882535-1704-225519080089588/AnsiballZ_file.py'
Nov 23 20:51:21 compute-1 sudo[132369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:51:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:21 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24540040c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:22 compute-1 python3.9[132371]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:51:22 compute-1 sudo[132369]: pam_unix(sudo:session): session closed for user root
Nov 23 20:51:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:22.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:22 compute-1 sudo[132445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djfasizbvabypngwczfxaqdzkdjjohjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931081.5882535-1704-225519080089588/AnsiballZ_stat.py'
Nov 23 20:51:22 compute-1 sudo[132445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:51:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:22.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:22 compute-1 python3.9[132447]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:51:22 compute-1 sudo[132445]: pam_unix(sudo:session): session closed for user root
Nov 23 20:51:22 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:51:22 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:51:22 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 20:51:22 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 20:51:22 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:51:23 compute-1 sudo[132596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tctlylhstqnabhxuxddnvhehzrjrshxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931082.6954741-1704-28837890510092/AnsiballZ_copy.py'
Nov 23 20:51:23 compute-1 sudo[132596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:51:23 compute-1 python3.9[132598]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763931082.6954741-1704-28837890510092/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:51:23 compute-1 sudo[132596]: pam_unix(sudo:session): session closed for user root
Nov 23 20:51:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:23 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24540040c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:23 compute-1 sudo[132673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecpkpeyrdkcufwdlmkcmhmxsrxqihfue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931082.6954741-1704-28837890510092/AnsiballZ_systemd.py'
Nov 23 20:51:23 compute-1 sudo[132673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:51:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:23 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:23 compute-1 python3.9[132675]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 20:51:23 compute-1 systemd[1]: Reloading.
Nov 23 20:51:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:23 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800022a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:23 compute-1 systemd-rc-local-generator[132703]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:51:23 compute-1 systemd-sysv-generator[132707]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:51:24 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:51:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:51:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:24.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:51:24 compute-1 sudo[132673]: pam_unix(sudo:session): session closed for user root
Nov 23 20:51:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:24.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:24 compute-1 sudo[132785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwgpstzgouppftyyokbiwgzkvfwpjwxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931082.6954741-1704-28837890510092/AnsiballZ_systemd.py'
Nov 23 20:51:24 compute-1 sudo[132785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:51:24 compute-1 python3.9[132787]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 20:51:24 compute-1 systemd[1]: Reloading.
Nov 23 20:51:25 compute-1 systemd-rc-local-generator[132814]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:51:25 compute-1 systemd-sysv-generator[132818]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:51:25 compute-1 systemd[1]: Starting ovn_controller container...
Nov 23 20:51:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:25 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740044f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:25 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24540040c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:25 compute-1 ceph-mon[80135]: pgmap v291: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:51:25 compute-1 systemd[1]: Started libcrun container.
Nov 23 20:51:25 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24963beabbf068cbcc5810ef578cb753310562df52d20741745cffaa9d82c286/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 23 20:51:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:25 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24540040c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:26.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:26 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87.
Nov 23 20:51:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:26.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:27 compute-1 podman[132828]: 2025-11-23 20:51:27.063264131 +0000 UTC m=+1.761016237 container init 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 20:51:27 compute-1 ovn_controller[132845]: + sudo -E kolla_set_configs
Nov 23 20:51:27 compute-1 podman[132828]: 2025-11-23 20:51:27.092641954 +0000 UTC m=+1.790394040 container start 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 23 20:51:27 compute-1 systemd[1]: Created slice User Slice of UID 0.
Nov 23 20:51:27 compute-1 ceph-mon[80135]: pgmap v292: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 20:51:27 compute-1 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 23 20:51:27 compute-1 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 23 20:51:27 compute-1 systemd[1]: Starting User Manager for UID 0...
Nov 23 20:51:27 compute-1 systemd[132867]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Nov 23 20:51:27 compute-1 edpm-start-podman-container[132828]: ovn_controller
Nov 23 20:51:27 compute-1 edpm-start-podman-container[132827]: Creating additional drop-in dependency for "ovn_controller" (5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87)
Nov 23 20:51:27 compute-1 podman[132854]: 2025-11-23 20:51:27.217267612 +0000 UTC m=+0.113679392 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 23 20:51:27 compute-1 systemd[1]: 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87-1b473563f9c00050.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 20:51:27 compute-1 systemd[1]: 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87-1b473563f9c00050.service: Failed with result 'exit-code'.
Nov 23 20:51:27 compute-1 systemd[1]: Reloading.
Nov 23 20:51:27 compute-1 systemd[132867]: Queued start job for default target Main User Target.
Nov 23 20:51:27 compute-1 systemd[132867]: Created slice User Application Slice.
Nov 23 20:51:27 compute-1 systemd[132867]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 23 20:51:27 compute-1 systemd[132867]: Started Daily Cleanup of User's Temporary Directories.
Nov 23 20:51:27 compute-1 systemd[132867]: Reached target Paths.
Nov 23 20:51:27 compute-1 systemd[132867]: Reached target Timers.
Nov 23 20:51:27 compute-1 systemd[132867]: Starting D-Bus User Message Bus Socket...
Nov 23 20:51:27 compute-1 systemd[132867]: Starting Create User's Volatile Files and Directories...
Nov 23 20:51:27 compute-1 systemd[132867]: Finished Create User's Volatile Files and Directories.
Nov 23 20:51:27 compute-1 systemd[132867]: Listening on D-Bus User Message Bus Socket.
Nov 23 20:51:27 compute-1 systemd[132867]: Reached target Sockets.
Nov 23 20:51:27 compute-1 systemd[132867]: Reached target Basic System.
Nov 23 20:51:27 compute-1 systemd[132867]: Reached target Main User Target.
Nov 23 20:51:27 compute-1 systemd[132867]: Startup finished in 120ms.
Nov 23 20:51:27 compute-1 systemd-sysv-generator[132937]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:51:27 compute-1 systemd-rc-local-generator[132934]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:51:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:27 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24540040c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:27 compute-1 systemd[1]: Started User Manager for UID 0.
Nov 23 20:51:27 compute-1 systemd[1]: Started ovn_controller container.
Nov 23 20:51:27 compute-1 systemd[1]: Started Session c1 of User root.
Nov 23 20:51:27 compute-1 sudo[132785]: pam_unix(sudo:session): session closed for user root
Nov 23 20:51:27 compute-1 ovn_controller[132845]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 23 20:51:27 compute-1 ovn_controller[132845]: INFO:__main__:Validating config file
Nov 23 20:51:27 compute-1 ovn_controller[132845]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 23 20:51:27 compute-1 ovn_controller[132845]: INFO:__main__:Writing out command to execute
Nov 23 20:51:27 compute-1 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 23 20:51:27 compute-1 ovn_controller[132845]: ++ cat /run_command
Nov 23 20:51:27 compute-1 ovn_controller[132845]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 23 20:51:27 compute-1 ovn_controller[132845]: + ARGS=
Nov 23 20:51:27 compute-1 ovn_controller[132845]: + sudo kolla_copy_cacerts
Nov 23 20:51:27 compute-1 systemd[1]: Started Session c2 of User root.
Nov 23 20:51:27 compute-1 sshd-session[132849]: Invalid user testuser from 102.176.81.29 port 60238
Nov 23 20:51:27 compute-1 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 23 20:51:27 compute-1 ovn_controller[132845]: + [[ ! -n '' ]]
Nov 23 20:51:27 compute-1 ovn_controller[132845]: + . kolla_extend_start
Nov 23 20:51:27 compute-1 ovn_controller[132845]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 23 20:51:27 compute-1 ovn_controller[132845]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 23 20:51:27 compute-1 ovn_controller[132845]: + umask 0022
Nov 23 20:51:27 compute-1 ovn_controller[132845]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Nov 23 20:51:27 compute-1 ovn_controller[132845]: 2025-11-23T20:51:27Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 23 20:51:27 compute-1 ovn_controller[132845]: 2025-11-23T20:51:27Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 23 20:51:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:27 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004510 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:27 compute-1 ovn_controller[132845]: 2025-11-23T20:51:27Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 23 20:51:27 compute-1 ovn_controller[132845]: 2025-11-23T20:51:27Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 23 20:51:27 compute-1 ovn_controller[132845]: 2025-11-23T20:51:27Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 23 20:51:27 compute-1 ovn_controller[132845]: 2025-11-23T20:51:27Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 23 20:51:27 compute-1 NetworkManager[49021]: <info>  [1763931087.8074] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Nov 23 20:51:27 compute-1 NetworkManager[49021]: <info>  [1763931087.8088] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 20:51:27 compute-1 NetworkManager[49021]: <info>  [1763931087.8113] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 23 20:51:27 compute-1 kernel: br-int: entered promiscuous mode
Nov 23 20:51:27 compute-1 NetworkManager[49021]: <info>  [1763931087.8124] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Nov 23 20:51:27 compute-1 NetworkManager[49021]: <info>  [1763931087.8146] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 23 20:51:27 compute-1 ovn_controller[132845]: 2025-11-23T20:51:27Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 23 20:51:27 compute-1 ovn_controller[132845]: 2025-11-23T20:51:27Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 23 20:51:27 compute-1 ovn_controller[132845]: 2025-11-23T20:51:27Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 23 20:51:27 compute-1 ovn_controller[132845]: 2025-11-23T20:51:27Z|00010|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 23 20:51:27 compute-1 ovn_controller[132845]: 2025-11-23T20:51:27Z|00011|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 23 20:51:27 compute-1 ovn_controller[132845]: 2025-11-23T20:51:27Z|00012|features|INFO|OVS Feature: ct_flush, state: supported
Nov 23 20:51:27 compute-1 ovn_controller[132845]: 2025-11-23T20:51:27Z|00013|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 23 20:51:27 compute-1 ovn_controller[132845]: 2025-11-23T20:51:27Z|00014|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 23 20:51:27 compute-1 ovn_controller[132845]: 2025-11-23T20:51:27Z|00015|main|INFO|OVS feature set changed, force recompute.
Nov 23 20:51:27 compute-1 ovn_controller[132845]: 2025-11-23T20:51:27Z|00016|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 23 20:51:27 compute-1 ovn_controller[132845]: 2025-11-23T20:51:27Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 23 20:51:27 compute-1 ovn_controller[132845]: 2025-11-23T20:51:27Z|00018|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 23 20:51:27 compute-1 ovn_controller[132845]: 2025-11-23T20:51:27Z|00019|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 23 20:51:27 compute-1 ovn_controller[132845]: 2025-11-23T20:51:27Z|00020|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Nov 23 20:51:27 compute-1 ovn_controller[132845]: 2025-11-23T20:51:27Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 23 20:51:27 compute-1 ovn_controller[132845]: 2025-11-23T20:51:27Z|00022|main|INFO|OVS feature set changed, force recompute.
Nov 23 20:51:27 compute-1 ovn_controller[132845]: 2025-11-23T20:51:27Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 23 20:51:27 compute-1 ovn_controller[132845]: 2025-11-23T20:51:27Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 23 20:51:27 compute-1 ovn_controller[132845]: 2025-11-23T20:51:27Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 23 20:51:27 compute-1 ovn_controller[132845]: 2025-11-23T20:51:27Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 23 20:51:27 compute-1 ovn_controller[132845]: 2025-11-23T20:51:27Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 23 20:51:27 compute-1 ovn_controller[132845]: 2025-11-23T20:51:27Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 23 20:51:27 compute-1 ovn_controller[132845]: 2025-11-23T20:51:27Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 23 20:51:27 compute-1 ovn_controller[132845]: 2025-11-23T20:51:27Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 23 20:51:27 compute-1 NetworkManager[49021]: <info>  [1763931087.8346] manager: (ovn-10e3bf-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 23 20:51:27 compute-1 NetworkManager[49021]: <info>  [1763931087.8353] manager: (ovn-6de892-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Nov 23 20:51:27 compute-1 systemd-udevd[132980]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 20:51:27 compute-1 kernel: genev_sys_6081: entered promiscuous mode
Nov 23 20:51:27 compute-1 NetworkManager[49021]: <info>  [1763931087.8548] device (genev_sys_6081): carrier: link connected
Nov 23 20:51:27 compute-1 NetworkManager[49021]: <info>  [1763931087.8551] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/21)
Nov 23 20:51:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:27 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:27 compute-1 sshd-session[132849]: Received disconnect from 102.176.81.29 port 60238:11: Bye Bye [preauth]
Nov 23 20:51:27 compute-1 sshd-session[132849]: Disconnected from invalid user testuser 102.176.81.29 port 60238 [preauth]
Nov 23 20:51:28 compute-1 ceph-mon[80135]: pgmap v293: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:51:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:28.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:51:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:28.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:51:28 compute-1 NetworkManager[49021]: <info>  [1763931088.5465] manager: (ovn-fa015a-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Nov 23 20:51:28 compute-1 sudo[133109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trlogpifnsspgbylqbcnofyxakcsxvqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931088.6386082-1788-30866719696136/AnsiballZ_command.py'
Nov 23 20:51:28 compute-1 sudo[133109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:51:29 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:51:29 compute-1 python3.9[133111]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:51:29 compute-1 ovs-vsctl[133112]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 23 20:51:29 compute-1 sudo[133109]: pam_unix(sudo:session): session closed for user root
Nov 23 20:51:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:29 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800022a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:29 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24540040c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:29 compute-1 sudo[133263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tclzjbqgndjortlsepgkmokxzxjdakxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931089.644906-1812-11627086319488/AnsiballZ_command.py'
Nov 23 20:51:29 compute-1 sudo[133263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:51:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:29 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:30 compute-1 python3.9[133265]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:51:30 compute-1 ovs-vsctl[133267]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 23 20:51:30 compute-1 sudo[133263]: pam_unix(sudo:session): session closed for user root
Nov 23 20:51:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:30.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:30 compute-1 ceph-mon[80135]: pgmap v294: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:51:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:30.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:31 compute-1 sudo[133418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlmwdhajyldbwznkrxeamijhhdxbvwpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931090.8967862-1854-115738747692397/AnsiballZ_command.py'
Nov 23 20:51:31 compute-1 sudo[133418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:51:31 compute-1 python3.9[133420]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:51:31 compute-1 ovs-vsctl[133421]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 23 20:51:31 compute-1 sudo[133418]: pam_unix(sudo:session): session closed for user root
Nov 23 20:51:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:31 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:31 compute-1 ceph-mon[80135]: pgmap v295: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 20:51:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:31 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800022a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:31 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800022a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:31 compute-1 sshd-session[121819]: Connection closed by 192.168.122.30 port 56666
Nov 23 20:51:31 compute-1 sshd-session[121816]: pam_unix(sshd:session): session closed for user zuul
Nov 23 20:51:31 compute-1 systemd[1]: session-49.scope: Deactivated successfully.
Nov 23 20:51:31 compute-1 systemd[1]: session-49.scope: Consumed 54.980s CPU time.
Nov 23 20:51:31 compute-1 systemd-logind[793]: Session 49 logged out. Waiting for processes to exit.
Nov 23 20:51:31 compute-1 systemd-logind[793]: Removed session 49.
Nov 23 20:51:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:51:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:32.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:51:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:32.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:33 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:33 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:51:33 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:33 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:33 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:33 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:34 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:51:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:34.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:34.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:35 compute-1 ceph-mon[80135]: pgmap v296: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:51:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:35 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800022a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:35 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:35 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:36.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:36 compute-1 ceph-mon[80135]: pgmap v297: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 20:51:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:36.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:37 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:37 compute-1 sudo[133452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 20:51:37 compute-1 sudo[133452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:51:37 compute-1 sudo[133452]: pam_unix(sudo:session): session closed for user root
Nov 23 20:51:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:37 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800022c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:37 compute-1 systemd[1]: Stopping User Manager for UID 0...
Nov 23 20:51:37 compute-1 systemd[132867]: Activating special unit Exit the Session...
Nov 23 20:51:37 compute-1 systemd[132867]: Stopped target Main User Target.
Nov 23 20:51:37 compute-1 systemd[132867]: Stopped target Basic System.
Nov 23 20:51:37 compute-1 systemd[132867]: Stopped target Paths.
Nov 23 20:51:37 compute-1 systemd[132867]: Stopped target Sockets.
Nov 23 20:51:37 compute-1 systemd[132867]: Stopped target Timers.
Nov 23 20:51:37 compute-1 systemd[132867]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 23 20:51:37 compute-1 systemd[132867]: Closed D-Bus User Message Bus Socket.
Nov 23 20:51:37 compute-1 systemd[132867]: Stopped Create User's Volatile Files and Directories.
Nov 23 20:51:37 compute-1 systemd[132867]: Removed slice User Application Slice.
Nov 23 20:51:37 compute-1 systemd[132867]: Reached target Shutdown.
Nov 23 20:51:37 compute-1 systemd[132867]: Finished Exit the Session.
Nov 23 20:51:37 compute-1 systemd[132867]: Reached target Exit the Session.
Nov 23 20:51:37 compute-1 systemd[1]: user@0.service: Deactivated successfully.
Nov 23 20:51:37 compute-1 systemd[1]: Stopped User Manager for UID 0.
Nov 23 20:51:37 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 23 20:51:37 compute-1 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 23 20:51:37 compute-1 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 23 20:51:37 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 23 20:51:37 compute-1 systemd[1]: Removed slice User Slice of UID 0.
Nov 23 20:51:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:37 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800022c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:38 compute-1 sshd-session[133478]: Accepted publickey for zuul from 192.168.122.30 port 46582 ssh2: ECDSA SHA256:7LF3rB/846W//CS4OIcVKlH1BXQGVCcZuH+b9rjPyTo
Nov 23 20:51:38 compute-1 systemd-logind[793]: New session 51 of user zuul.
Nov 23 20:51:38 compute-1 systemd[1]: Started Session 51 of User zuul.
Nov 23 20:51:38 compute-1 sshd-session[133478]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 20:51:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:51:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:38.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:51:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:38.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:38 compute-1 ceph-mon[80135]: pgmap v298: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:51:38 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:51:38 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:51:38 compute-1 sudo[133582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:51:38 compute-1 sudo[133582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:51:38 compute-1 sudo[133582]: pam_unix(sudo:session): session closed for user root
Nov 23 20:51:39 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:51:39 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:39 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:39 compute-1 ceph-mon[80135]: pgmap v299: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:51:39 compute-1 python3.9[133656]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:51:39 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:39 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:39 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:39 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:40.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:40.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:40 compute-1 sudo[133811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iekemhrfktvgqxraeeeifxyrnathohlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931100.0834832-63-11314264578099/AnsiballZ_file.py'
Nov 23 20:51:40 compute-1 sudo[133811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:51:40 compute-1 python3.9[133813]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:51:40 compute-1 sudo[133811]: pam_unix(sudo:session): session closed for user root
Nov 23 20:51:41 compute-1 sudo[133963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpqemqiqqgfjcukzootbhigjkdcsmmuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931100.8525553-63-198590490762156/AnsiballZ_file.py'
Nov 23 20:51:41 compute-1 sudo[133963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:51:41 compute-1 python3.9[133965]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:51:41 compute-1 sudo[133963]: pam_unix(sudo:session): session closed for user root
Nov 23 20:51:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:41 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800022e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:41 compute-1 sudo[134116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywrbkmofleezfqzdostituhgpzakfxis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931101.4581141-63-210202489169265/AnsiballZ_file.py'
Nov 23 20:51:41 compute-1 sudo[134116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:51:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:41 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:41 compute-1 ceph-mon[80135]: pgmap v300: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 20:51:41 compute-1 python3.9[134118]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:51:41 compute-1 sudo[134116]: pam_unix(sudo:session): session closed for user root
Nov 23 20:51:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:41 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:42 compute-1 sudo[134268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gickiziciqsfaijiiyrpealaqbykeyka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931102.020909-63-146415031622720/AnsiballZ_file.py'
Nov 23 20:51:42 compute-1 sudo[134268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:51:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:42.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:42 compute-1 python3.9[134270]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:51:42 compute-1 sudo[134268]: pam_unix(sudo:session): session closed for user root
Nov 23 20:51:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:51:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:42.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:51:42 compute-1 sudo[134420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwcadifdosypbdfzlffyjgwciumjsuqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931102.575656-63-278371984704681/AnsiballZ_file.py'
Nov 23 20:51:42 compute-1 sudo[134420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:51:43 compute-1 python3.9[134422]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:51:43 compute-1 sudo[134420]: pam_unix(sudo:session): session closed for user root
Nov 23 20:51:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:43 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:43 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:43 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:44 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:51:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:51:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:44.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:51:44 compute-1 ceph-mon[80135]: pgmap v301: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:51:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:44.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:44 compute-1 python3.9[134573]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:51:45 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:45 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480002320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:45 compute-1 sudo[134724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enuiaiheajgkezgkfxcwvrhbeensbyac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931105.1558244-195-1834766390275/AnsiballZ_seboolean.py'
Nov 23 20:51:45 compute-1 sudo[134724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:51:45 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:45 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:45 compute-1 python3.9[134726]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 23 20:51:45 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:45 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:46.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:46 compute-1 ceph-mon[80135]: pgmap v302: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 20:51:46 compute-1 sudo[134724]: pam_unix(sudo:session): session closed for user root
Nov 23 20:51:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:46.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:47 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205147 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 20:51:47 compute-1 ceph-mon[80135]: pgmap v303: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:51:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:47 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480002340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:47 compute-1 python3.9[134878]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:51:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:47 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480002340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:48.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:51:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:48.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:51:48 compute-1 python3.9[135001]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931107.2444031-219-244349140641201/.source follow=False _original_basename=haproxy.j2 checksum=deae64da24ad28f71dc47276f2e9f268f19a4519 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:51:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:51:49 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:51:49 compute-1 python3.9[135152]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:51:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:49 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468002970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:49 compute-1 ceph-mon[80135]: pgmap v304: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:51:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:49 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2450000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:49 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:49 compute-1 python3.9[135274]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931108.8271143-264-108775523783921/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:51:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:51:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:50.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:51:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:50.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:50 compute-1 sudo[135424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymoxvvmfgqifmulgynukwwfvcnhapnrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931110.4912794-315-225050890699344/AnsiballZ_setup.py'
Nov 23 20:51:50 compute-1 sudo[135424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:51:51 compute-1 python3.9[135426]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 20:51:51 compute-1 sudo[135424]: pam_unix(sudo:session): session closed for user root
Nov 23 20:51:51 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:51 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480002360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:51 compute-1 sudo[135509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuwngkcntclmebfonsqhjzmflhivbctv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931110.4912794-315-225050890699344/AnsiballZ_dnf.py'
Nov 23 20:51:51 compute-1 sudo[135509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:51:51 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:51 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468002970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:51 compute-1 ceph-mon[80135]: pgmap v305: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 20:51:51 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:51 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:52 compute-1 python3.9[135511]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 20:51:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:52.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:51:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:52.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:51:53 compute-1 sudo[135509]: pam_unix(sudo:session): session closed for user root
Nov 23 20:51:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:53 compute-1 sshd-session[135585]: Invalid user solv from 161.35.133.66 port 50654
Nov 23 20:51:53 compute-1 sshd-session[135585]: Connection closed by invalid user solv 161.35.133.66 port 50654 [preauth]
Nov 23 20:51:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480002380 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:53 compute-1 ceph-mon[80135]: pgmap v306: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 20:51:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468002970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:54 compute-1 sudo[135665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wofhljsuqnckqaoptumjnehiovvabsyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931113.5385165-351-248797935223365/AnsiballZ_systemd.py'
Nov 23 20:51:54 compute-1 sudo[135665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:51:54 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:51:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:54.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:54 compute-1 python3.9[135667]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 23 20:51:54 compute-1 sudo[135665]: pam_unix(sudo:session): session closed for user root
Nov 23 20:51:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:54.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:55 compute-1 python3.9[135820]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:51:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:55 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:55 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:55 compute-1 python3.9[135942]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931114.8181174-375-143028732306432/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:51:55 compute-1 ceph-mon[80135]: pgmap v307: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:51:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:55 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800023a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:56.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:56 compute-1 python3.9[136092]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:51:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:56.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:56 : epoch 69237329 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 20:51:57 compute-1 python3.9[136213]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931115.9870987-375-133835235041949/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:51:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:57 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468002970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:57 compute-1 ovn_controller[132845]: 2025-11-23T20:51:57Z|00025|memory|INFO|16768 kB peak resident set size after 29.9 seconds
Nov 23 20:51:57 compute-1 ovn_controller[132845]: 2025-11-23T20:51:57Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Nov 23 20:51:57 compute-1 podman[136239]: 2025-11-23 20:51:57.748197466 +0000 UTC m=+0.166164172 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 23 20:51:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:57 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:57 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:57 compute-1 ceph-mon[80135]: pgmap v308: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:51:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:58.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:51:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:51:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:58.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:51:58 compute-1 python3.9[136389]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:51:58 compute-1 sudo[136437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:51:58 compute-1 sudo[136437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:51:58 compute-1 sudo[136437]: pam_unix(sudo:session): session closed for user root
Nov 23 20:51:59 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:51:59 compute-1 python3.9[136535]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931118.254356-507-42289708055651/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:51:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:59 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f248000aa90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:59 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468002970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:59 compute-1 python3.9[136686]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:51:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:59 : epoch 69237329 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 20:51:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:59 : epoch 69237329 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:51:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:59 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740048b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:51:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:59 : epoch 69237329 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:51:59 compute-1 ceph-mon[80135]: pgmap v309: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:52:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:00.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:00 compute-1 python3.9[136807]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931119.3901117-507-152641601991289/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:52:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:00.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:01 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:01 compute-1 python3.9[136957]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:52:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:01 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f248000aa90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:01 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f248000aa90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:01 compute-1 ceph-mon[80135]: pgmap v310: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 20:52:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:02.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:02 compute-1 sudo[137110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nelmkespnqsilohhwtnybdswofgbuhtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931122.0698473-621-97625255062955/AnsiballZ_file.py'
Nov 23 20:52:02 compute-1 sudo[137110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:52:02 compute-1 python3.9[137112]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:52:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:02.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:02 compute-1 sudo[137110]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:02 : epoch 69237329 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 20:52:03 compute-1 sudo[137262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myfamvqyhwspabqgiqiotniaodmdcpqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931122.9119074-645-182019141763429/AnsiballZ_stat.py'
Nov 23 20:52:03 compute-1 sudo[137262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:52:03 compute-1 python3.9[137264]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:52:03 compute-1 sudo[137262]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:03 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740048d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:03 compute-1 sudo[137341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbvircrgssmtlxiilxuvhaevumtwwfcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931122.9119074-645-182019141763429/AnsiballZ_file.py'
Nov 23 20:52:03 compute-1 sudo[137341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:52:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:03 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24500036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:03 compute-1 python3.9[137343]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:52:03 compute-1 sudo[137341]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:03 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468002970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:04 compute-1 ceph-mon[80135]: pgmap v311: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 20:52:04 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:52:04 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:52:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:52:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:04.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:52:04 compute-1 sudo[137493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qncpvxwtwzgtfxhrrxzvrmyiavtoeets ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931124.0374203-645-69170871762795/AnsiballZ_stat.py'
Nov 23 20:52:04 compute-1 sudo[137493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:52:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:04.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:04 compute-1 python3.9[137495]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:52:04 compute-1 sudo[137493]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:04 compute-1 sudo[137571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utjasutizrwztoovsspqpoxjlygcxddl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931124.0374203-645-69170871762795/AnsiballZ_file.py'
Nov 23 20:52:04 compute-1 sudo[137571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:52:05 compute-1 python3.9[137573]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:52:05 compute-1 sudo[137571]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:05 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f248000aa90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:05 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740048f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:05 compute-1 sudo[137724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxozsemytksazmdbddxpgciyrbplknjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931125.605628-714-266827902684252/AnsiballZ_file.py'
Nov 23 20:52:05 compute-1 sudo[137724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:52:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:05 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740048f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:06 compute-1 ceph-mon[80135]: pgmap v312: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 20:52:06 compute-1 python3.9[137726]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:52:06 compute-1 sudo[137724]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:06.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:06.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:06 compute-1 sudo[137876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzmbhrtfxwwpdalkzlqijxnbmfrnudke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931126.4824584-738-257393919884062/AnsiballZ_stat.py'
Nov 23 20:52:06 compute-1 sudo[137876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:52:07 compute-1 python3.9[137878]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:52:07 compute-1 sudo[137876]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:07 compute-1 sudo[137954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uffdkbvujvgsvikgftvroiibjnzivyxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931126.4824584-738-257393919884062/AnsiballZ_file.py'
Nov 23 20:52:07 compute-1 sudo[137954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:52:07 compute-1 python3.9[137956]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:52:07 compute-1 sudo[137954]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:07 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468002970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:07 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f248000aa90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:07 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740048f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:08 compute-1 ceph-mon[80135]: pgmap v313: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 20:52:08 compute-1 sudo[138107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uetnsimoqaioiaepjfqgoiwdscubkjaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931127.9013176-774-186099284232896/AnsiballZ_stat.py'
Nov 23 20:52:08 compute-1 sudo[138107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:52:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:08.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:08 compute-1 python3.9[138109]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:52:08 compute-1 sudo[138107]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:08.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:09 compute-1 sudo[138185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svcnvjzwvosajodbksghgsyosklpsmux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931127.9013176-774-186099284232896/AnsiballZ_file.py'
Nov 23 20:52:09 compute-1 sudo[138185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:52:09 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:52:09 compute-1 python3.9[138187]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:52:09 compute-1 sudo[138185]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:09 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24500036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205209 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 20:52:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:09 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468002970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:09 compute-1 sudo[138338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwxzxglmjamirsifhfommqnlsotiorhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931129.5531077-810-25791522668168/AnsiballZ_systemd.py'
Nov 23 20:52:09 compute-1 sudo[138338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:52:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:09 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f248000aa90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:10 compute-1 ceph-mon[80135]: pgmap v314: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 20:52:10 compute-1 python3.9[138340]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 20:52:10 compute-1 systemd[1]: Reloading.
Nov 23 20:52:10 compute-1 systemd-rc-local-generator[138369]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:52:10 compute-1 systemd-sysv-generator[138372]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:52:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:10.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:10 compute-1 sudo[138338]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:52:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:10.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:52:11 compute-1 sudo[138528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoislsdigoeudqmhlsomamjixxmzavjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931130.9992595-834-204985289535203/AnsiballZ_stat.py'
Nov 23 20:52:11 compute-1 sudo[138528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:52:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:11 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:11 compute-1 python3.9[138530]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:52:11 compute-1 sudo[138528]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:11 compute-1 sudo[138607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkjyttfnnjubdlzkejjpkoobwfqqrlll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931130.9992595-834-204985289535203/AnsiballZ_file.py'
Nov 23 20:52:11 compute-1 sudo[138607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:52:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:11 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24500036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:11 compute-1 python3.9[138609]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:52:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:11 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468002970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:11 compute-1 sudo[138607]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:12 compute-1 ceph-mon[80135]: pgmap v315: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 20:52:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:12.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:12.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:12 compute-1 sudo[138759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bltljrsmuecgzqsaiktctxywumtaukxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931132.3674667-870-181652148017140/AnsiballZ_stat.py'
Nov 23 20:52:12 compute-1 sudo[138759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:52:12 compute-1 python3.9[138761]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:52:12 compute-1 sudo[138759]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:13 compute-1 sudo[138837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nevgjznbyjhzrhnjbkombfwqxhijjvoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931132.3674667-870-181652148017140/AnsiballZ_file.py'
Nov 23 20:52:13 compute-1 sudo[138837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:52:13 compute-1 python3.9[138839]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:52:13 compute-1 sudo[138837]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:13 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f248000aa90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:13 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004930 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:13 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24500036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:14 compute-1 sudo[138990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubxgytifkwtgmkhnfpcoqpnlqqztipzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931133.7344446-906-141442721036648/AnsiballZ_systemd.py'
Nov 23 20:52:14 compute-1 sudo[138990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:52:14 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:52:14 compute-1 ceph-mon[80135]: pgmap v316: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Nov 23 20:52:14 compute-1 python3.9[138992]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 20:52:14 compute-1 systemd[1]: Reloading.
Nov 23 20:52:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:14.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:14 compute-1 systemd-rc-local-generator[139019]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:52:14 compute-1 systemd-sysv-generator[139023]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:52:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:14.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:14 compute-1 systemd[1]: Starting Create netns directory...
Nov 23 20:52:14 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 23 20:52:14 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 23 20:52:14 compute-1 systemd[1]: Finished Create netns directory.
Nov 23 20:52:14 compute-1 sudo[138990]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:15 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468002970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:15 compute-1 sudo[139183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chrisaljgmwqbbjexoschngtgbfuldvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931135.2484221-936-145123936917951/AnsiballZ_file.py'
Nov 23 20:52:15 compute-1 sudo[139183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:52:15 compute-1 python3.9[139185]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:52:15 compute-1 sudo[139183]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:15 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f248000aa90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:15 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:16 compute-1 ceph-mon[80135]: pgmap v317: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Nov 23 20:52:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:16.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:16 compute-1 sudo[139335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwmvowjvdwpenzagnsvigtderxroxrnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931136.0811546-960-57152904868133/AnsiballZ_stat.py'
Nov 23 20:52:16 compute-1 sudo[139335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:52:16 compute-1 python3.9[139337]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:52:16 compute-1 sudo[139335]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:16.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:16 compute-1 sudo[139458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayiqhaoaawgzwwtgcifrodanvsfviddd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931136.0811546-960-57152904868133/AnsiballZ_copy.py'
Nov 23 20:52:16 compute-1 sudo[139458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:52:17 compute-1 python3.9[139460]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931136.0811546-960-57152904868133/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:52:17 compute-1 sudo[139458]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:17 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24500036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:17 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468002970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:17 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f248000aa90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:18 compute-1 sudo[139611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rryhyomtyakridjmrkukwqkgordqjgau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931137.9018338-1011-222541309260886/AnsiballZ_file.py'
Nov 23 20:52:18 compute-1 sudo[139611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:52:18 compute-1 ceph-mon[80135]: pgmap v318: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 20:52:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:52:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:18.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:18 compute-1 python3.9[139613]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:52:18 compute-1 sudo[139611]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:18.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:18 compute-1 sudo[139763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tolbfnbbdnkdgukdxumzjkijjkuziisd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931138.6952834-1035-43771545588682/AnsiballZ_stat.py'
Nov 23 20:52:18 compute-1 sudo[139763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:52:18 compute-1 sudo[139764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:52:18 compute-1 sudo[139764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:52:18 compute-1 sudo[139764]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:19 compute-1 python3.9[139771]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:52:19 compute-1 sudo[139763]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:19 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:52:19 compute-1 sudo[139911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krriliiasktymyloygvouszechycksct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931138.6952834-1035-43771545588682/AnsiballZ_copy.py'
Nov 23 20:52:19 compute-1 sudo[139911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:52:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:19 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:19 compute-1 python3.9[139913]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763931138.6952834-1035-43771545588682/.source.json _original_basename=.0r_1a0mz follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:52:19 compute-1 sudo[139911]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:19 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24500036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:19 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448001070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:20 compute-1 ceph-mon[80135]: pgmap v319: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 20:52:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:20.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:20 compute-1 sudo[140066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hroodhcpnvwlitmghigsizeuuwgswnhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931140.12713-1080-252798488023152/AnsiballZ_file.py'
Nov 23 20:52:20 compute-1 sudo[140066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:52:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:20.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:20 compute-1 python3.9[140068]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:52:20 compute-1 sudo[140066]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:21 compute-1 sudo[140218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjvvbwnldieuwsrmoodhugdohqkedcvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931141.009015-1104-237250671575003/AnsiballZ_stat.py'
Nov 23 20:52:21 compute-1 sudo[140218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:52:21 compute-1 sudo[140218]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:21 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f248000aa90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:21 compute-1 sudo[140342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fguufdvsrldibienyhffgtnvmujuspwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931141.009015-1104-237250671575003/AnsiballZ_copy.py'
Nov 23 20:52:21 compute-1 sudo[140342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:52:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:21 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:21 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24500036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:22 compute-1 sudo[140342]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:22 compute-1 ceph-mon[80135]: pgmap v320: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 23 20:52:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:22.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 20:52:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:22.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 20:52:23 compute-1 sudo[140494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvrkrcpsipyvxzykpqxqxensebljvbtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931142.6127443-1155-273880752736997/AnsiballZ_container_config_data.py'
Nov 23 20:52:23 compute-1 sudo[140494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:52:23 compute-1 python3.9[140496]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 23 20:52:23 compute-1 sudo[140494]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:23 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448001070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:23 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f248000aa90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:23 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:24 compute-1 sudo[140647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gblqyiyzikiwqqulwyufchedzypsjgii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931143.6401064-1182-236515085316713/AnsiballZ_container_config_hash.py'
Nov 23 20:52:24 compute-1 sudo[140647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:52:24 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:52:24 compute-1 python3.9[140649]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 23 20:52:24 compute-1 ceph-mon[80135]: pgmap v321: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:52:24 compute-1 sudo[140647]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:24.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:24.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:25 compute-1 sudo[140799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhwiuxycvyvomoexzthpcadofszcibey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931144.670408-1209-7728097403945/AnsiballZ_podman_container_info.py'
Nov 23 20:52:25 compute-1 sudo[140799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:52:25 compute-1 python3.9[140801]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 23 20:52:25 compute-1 sudo[140799]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:25 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24500036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:25 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448001070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:25 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f248000aa90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:26 compute-1 ceph-mon[80135]: pgmap v322: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 20:52:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:52:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:26.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:52:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:26.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:27 compute-1 sudo[140979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jizfttolzsyyeluhveypwbmmovkrydsx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763931146.5558286-1248-127582018879266/AnsiballZ_edpm_container_manage.py'
Nov 23 20:52:27 compute-1 sudo[140979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:52:27 compute-1 python3[140981]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 23 20:52:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:27 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:27 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2450003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:27 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2450003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:28 compute-1 ceph-mon[80135]: pgmap v323: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:52:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:28.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:52:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:28.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:52:28 compute-1 podman[141028]: 2025-11-23 20:52:28.687659307 +0000 UTC m=+0.097834312 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 23 20:52:29 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:52:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:29 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f248000aa90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:29 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:29 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2450003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:30.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:30.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:30 compute-1 ceph-mon[80135]: pgmap v324: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:52:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:31 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:31 compute-1 ceph-mon[80135]: pgmap v325: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 20:52:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:31 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f248000aa90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:52:31 compute-1 kernel: ganesha.nfsd[113671]: segfault at 50 ip 00007f25294d132e sp 00007f24ebffe210 error 4 in libntirpc.so.5.8[7f25294b6000+2c000] likely on CPU 3 (core 0, socket 3)
Nov 23 20:52:31 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 23 20:52:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:31 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004990 fd 38 proxy ignored for local
Nov 23 20:52:31 compute-1 systemd[1]: Started Process Core Dump (PID 141089/UID 0).
Nov 23 20:52:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:32.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:32.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:34.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:34.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:35 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:52:35 compute-1 systemd-coredump[141090]: Process 112675 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 43:
                                                    #0  0x00007f25294d132e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 23 20:52:35 compute-1 systemd[1]: systemd-coredump@1-141089-0.service: Deactivated successfully.
Nov 23 20:52:35 compute-1 systemd[1]: systemd-coredump@1-141089-0.service: Consumed 1.171s CPU time.
Nov 23 20:52:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:52:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:36.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:52:36 compute-1 ceph-mon[80135]: pgmap v326: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:52:36 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:52:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:52:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:36.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:52:37 compute-1 sudo[141114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:52:37 compute-1 sudo[141114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:52:37 compute-1 sudo[141114]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:37 compute-1 sudo[141139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 20:52:37 compute-1 sudo[141139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:52:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:38.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:38.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:38 compute-1 ceph-mon[80135]: pgmap v327: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 20:52:39 compute-1 sudo[141177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:52:39 compute-1 sudo[141177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:52:39 compute-1 sudo[141177]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:39 compute-1 podman[141097]: 2025-11-23 20:52:39.358436236 +0000 UTC m=+3.501685438 container died 9cce1bf66affa6ef4f347207d4a0ad972590fbbe226e35c4c7f83bf8a6579c22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 20:52:39 compute-1 ceph-mon[80135]: pgmap v328: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 20:52:39 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205239 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 20:52:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:40.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:40 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:52:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:40.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:41 compute-1 sudo[141139]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:42 compute-1 ceph-mon[80135]: pgmap v329: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 20:52:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:52:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:42.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:52:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:42.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:43 compute-1 systemd[1]: var-lib-containers-storage-overlay-36431d4c51e2d3482a2149cb2663510026d0fcb8438692ee02935721d35a5258-merged.mount: Deactivated successfully.
Nov 23 20:52:44 compute-1 ceph-mon[80135]: pgmap v330: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:52:44 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:52:44 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 20:52:44 compute-1 podman[141097]: 2025-11-23 20:52:44.108531885 +0000 UTC m=+8.251781077 container remove 9cce1bf66affa6ef4f347207d4a0ad972590fbbe226e35c4c7f83bf8a6579c22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 20:52:44 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Main process exited, code=exited, status=139/n/a
Nov 23 20:52:44 compute-1 podman[140994]: 2025-11-23 20:52:44.155168365 +0000 UTC m=+16.750071148 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 23 20:52:44 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Failed with result 'exit-code'.
Nov 23 20:52:44 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.780s CPU time.
Nov 23 20:52:44 compute-1 podman[141311]: 2025-11-23 20:52:44.320933453 +0000 UTC m=+0.058962640 container create ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 23 20:52:44 compute-1 podman[141311]: 2025-11-23 20:52:44.288410983 +0000 UTC m=+0.026440190 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 23 20:52:44 compute-1 python3[140981]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 23 20:52:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:52:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:44.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:52:44 compute-1 sudo[140979]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:44.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:44 compute-1 radosgw[84498]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Nov 23 20:52:45 compute-1 ceph-mon[80135]: pgmap v331: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 20:52:45 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:52:45 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:52:45 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 20:52:45 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 20:52:45 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:52:45 compute-1 sudo[141497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xetnwfkryqjoxfgcxldpzooapkknhkxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931164.755718-1272-199143218137949/AnsiballZ_stat.py'
Nov 23 20:52:45 compute-1 sudo[141497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:52:45 compute-1 python3.9[141499]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:52:45 compute-1 sudo[141497]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:45 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:52:46 compute-1 sudo[141652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttoxyboiyqzfbjuqsodaoeymanpklyjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931165.784661-1299-278713179768641/AnsiballZ_file.py'
Nov 23 20:52:46 compute-1 sudo[141652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:52:46 compute-1 ceph-mon[80135]: pgmap v332: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:52:46 compute-1 python3.9[141654]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:52:46 compute-1 sudo[141652]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:46.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:46 compute-1 sudo[141728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovydccobbxepojmyfbupzgtjfiryosci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931165.784661-1299-278713179768641/AnsiballZ_stat.py'
Nov 23 20:52:46 compute-1 sudo[141728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:52:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:46.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:46 compute-1 python3.9[141730]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:52:46 compute-1 sudo[141728]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:47 compute-1 sudo[141879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oevnkidgqbazfhhvpwswrovmjlvfktbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931166.7258358-1299-71852450716199/AnsiballZ_copy.py'
Nov 23 20:52:47 compute-1 sudo[141879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:52:47 compute-1 python3.9[141881]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763931166.7258358-1299-71852450716199/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:52:47 compute-1 sudo[141879]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:47 compute-1 sudo[141956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsyibnonjgvgedkusqnkbbraddxnjzdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931166.7258358-1299-71852450716199/AnsiballZ_systemd.py'
Nov 23 20:52:47 compute-1 sudo[141956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:52:47 compute-1 python3.9[141958]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 20:52:47 compute-1 systemd[1]: Reloading.
Nov 23 20:52:47 compute-1 systemd-rc-local-generator[141981]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:52:47 compute-1 systemd-sysv-generator[141986]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:52:48 compute-1 ceph-mon[80135]: pgmap v333: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 20:52:48 compute-1 sudo[141956]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:52:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:48.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:52:48 compute-1 sudo[142069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoszummwjbxozrdevnrgwrwlnfzlzksb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931166.7258358-1299-71852450716199/AnsiballZ_systemd.py'
Nov 23 20:52:48 compute-1 sudo[142069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:52:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:52:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:48.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:52:48 compute-1 python3.9[142071]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 20:52:48 compute-1 sudo[142072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 20:52:48 compute-1 sudo[142072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:52:48 compute-1 sudo[142072]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:48 compute-1 systemd[1]: Reloading.
Nov 23 20:52:48 compute-1 systemd-sysv-generator[142129]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:52:48 compute-1 systemd-rc-local-generator[142126]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:52:49 compute-1 systemd[1]: Starting ovn_metadata_agent container...
Nov 23 20:52:49 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:52:49 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:52:49 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:52:49 compute-1 systemd[1]: Started libcrun container.
Nov 23 20:52:49 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4aa44a3bea29e4c51158edcd152131bb41d9075b6cc6f242435ec532892ba2/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 23 20:52:49 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4aa44a3bea29e4c51158edcd152131bb41d9075b6cc6f242435ec532892ba2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 20:52:49 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab.
Nov 23 20:52:49 compute-1 podman[142137]: 2025-11-23 20:52:49.15189735 +0000 UTC m=+0.116449550 container init ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 23 20:52:49 compute-1 ovn_metadata_agent[142153]: + sudo -E kolla_set_configs
Nov 23 20:52:49 compute-1 podman[142137]: 2025-11-23 20:52:49.178897893 +0000 UTC m=+0.143450063 container start ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 23 20:52:49 compute-1 edpm-start-podman-container[142137]: ovn_metadata_agent
Nov 23 20:52:49 compute-1 ovn_metadata_agent[142153]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 23 20:52:49 compute-1 ovn_metadata_agent[142153]: INFO:__main__:Validating config file
Nov 23 20:52:49 compute-1 ovn_metadata_agent[142153]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 23 20:52:49 compute-1 ovn_metadata_agent[142153]: INFO:__main__:Copying service configuration files
Nov 23 20:52:49 compute-1 ovn_metadata_agent[142153]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 23 20:52:49 compute-1 ovn_metadata_agent[142153]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 23 20:52:49 compute-1 ovn_metadata_agent[142153]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 23 20:52:49 compute-1 ovn_metadata_agent[142153]: INFO:__main__:Writing out command to execute
Nov 23 20:52:49 compute-1 ovn_metadata_agent[142153]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 23 20:52:49 compute-1 ovn_metadata_agent[142153]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 23 20:52:49 compute-1 ovn_metadata_agent[142153]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 23 20:52:49 compute-1 ovn_metadata_agent[142153]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 23 20:52:49 compute-1 ovn_metadata_agent[142153]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 23 20:52:49 compute-1 ovn_metadata_agent[142153]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 23 20:52:49 compute-1 ovn_metadata_agent[142153]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 23 20:52:49 compute-1 edpm-start-podman-container[142136]: Creating additional drop-in dependency for "ovn_metadata_agent" (ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab)
Nov 23 20:52:49 compute-1 ovn_metadata_agent[142153]: ++ cat /run_command
Nov 23 20:52:49 compute-1 ovn_metadata_agent[142153]: + CMD=neutron-ovn-metadata-agent
Nov 23 20:52:49 compute-1 ovn_metadata_agent[142153]: + ARGS=
Nov 23 20:52:49 compute-1 ovn_metadata_agent[142153]: + sudo kolla_copy_cacerts
Nov 23 20:52:49 compute-1 systemd[1]: Reloading.
Nov 23 20:52:49 compute-1 ovn_metadata_agent[142153]: + [[ ! -n '' ]]
Nov 23 20:52:49 compute-1 ovn_metadata_agent[142153]: + . kolla_extend_start
Nov 23 20:52:49 compute-1 ovn_metadata_agent[142153]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 23 20:52:49 compute-1 ovn_metadata_agent[142153]: Running command: 'neutron-ovn-metadata-agent'
Nov 23 20:52:49 compute-1 ovn_metadata_agent[142153]: + umask 0022
Nov 23 20:52:49 compute-1 ovn_metadata_agent[142153]: + exec neutron-ovn-metadata-agent
Nov 23 20:52:49 compute-1 podman[142160]: 2025-11-23 20:52:49.262603784 +0000 UTC m=+0.074698301 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Nov 23 20:52:49 compute-1 systemd-sysv-generator[142233]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:52:49 compute-1 systemd-rc-local-generator[142228]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:52:49 compute-1 sshd-session[141994]: Received disconnect from 102.176.81.29 port 34494:11: Bye Bye [preauth]
Nov 23 20:52:49 compute-1 sshd-session[141994]: Disconnected from authenticating user root 102.176.81.29 port 34494 [preauth]
Nov 23 20:52:49 compute-1 systemd[1]: Started ovn_metadata_agent container.
Nov 23 20:52:49 compute-1 sudo[142069]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:50 compute-1 ceph-mon[80135]: pgmap v334: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 20:52:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:50.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:50 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:52:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:50.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.985 142158 INFO neutron.common.config [-] Logging enabled!
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.986 142158 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.986 142158 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.987 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.987 142158 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.987 142158 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.988 142158 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.988 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.988 142158 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.988 142158 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.988 142158 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.988 142158 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.989 142158 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.989 142158 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.989 142158 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.989 142158 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.989 142158 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.989 142158 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.990 142158 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.990 142158 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.990 142158 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.990 142158 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.990 142158 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.990 142158 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.990 142158 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.991 142158 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.991 142158 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.991 142158 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.991 142158 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.991 142158 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.991 142158 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.992 142158 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.992 142158 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.992 142158 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.992 142158 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.992 142158 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.993 142158 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.993 142158 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.993 142158 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.993 142158 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.993 142158 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.993 142158 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.994 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.994 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.994 142158 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.994 142158 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.994 142158 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.994 142158 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.994 142158 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.995 142158 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.995 142158 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.995 142158 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.995 142158 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.995 142158 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.995 142158 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.996 142158 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.996 142158 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.996 142158 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.996 142158 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.996 142158 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.996 142158 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.997 142158 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.997 142158 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.997 142158 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.997 142158 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.997 142158 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.997 142158 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.998 142158 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.998 142158 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.998 142158 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.998 142158 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.998 142158 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.998 142158 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.999 142158 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.999 142158 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.999 142158 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.999 142158 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.999 142158 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:50 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.999 142158 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.999 142158 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.000 142158 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.000 142158 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.000 142158 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.000 142158 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.000 142158 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.000 142158 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.000 142158 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.001 142158 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.001 142158 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.001 142158 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.001 142158 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.001 142158 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.001 142158 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.002 142158 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.002 142158 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.002 142158 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.002 142158 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.002 142158 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.002 142158 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.002 142158 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.002 142158 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.003 142158 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.003 142158 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.003 142158 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.003 142158 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.003 142158 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.003 142158 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.004 142158 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.004 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.004 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.004 142158 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.004 142158 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.004 142158 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.004 142158 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.005 142158 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.005 142158 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.005 142158 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.005 142158 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.005 142158 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.005 142158 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.006 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.006 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.006 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.006 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.006 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.007 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.007 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.007 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.008 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.008 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.008 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.008 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.008 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.008 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.008 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.009 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.009 142158 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.009 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.009 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.009 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.009 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.009 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.010 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.010 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.010 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.010 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.010 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.010 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.011 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.011 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.011 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.011 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.011 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.011 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.012 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.012 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.012 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.012 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.012 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.012 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.012 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.013 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.013 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.013 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.013 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.013 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.013 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.013 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.014 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.014 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.014 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.014 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.014 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.014 142158 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.015 142158 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.015 142158 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.015 142158 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.015 142158 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.015 142158 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.015 142158 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.015 142158 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.016 142158 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.016 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.016 142158 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.016 142158 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.016 142158 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.016 142158 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.017 142158 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.017 142158 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.017 142158 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.017 142158 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.017 142158 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.017 142158 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.018 142158 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.018 142158 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.018 142158 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.018 142158 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.018 142158 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.018 142158 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.019 142158 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.019 142158 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.019 142158 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.019 142158 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.019 142158 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.019 142158 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.020 142158 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.020 142158 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.020 142158 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.020 142158 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.021 142158 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.021 142158 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.021 142158 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.021 142158 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.021 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.021 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.022 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.022 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.022 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.022 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.022 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.023 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.023 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.023 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.023 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.023 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.023 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.023 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.024 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.024 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.024 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.024 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.024 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.024 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.025 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.025 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.025 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.025 142158 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.025 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.025 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.026 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.026 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.026 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.026 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.026 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.026 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.027 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.027 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.027 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.027 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.027 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.028 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.028 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.028 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.028 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.028 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.028 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.029 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.029 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.029 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.029 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.029 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.029 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.030 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.030 142158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.030 142158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.030 142158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.030 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.030 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.030 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.031 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.031 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.031 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.031 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.031 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.031 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.032 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.032 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.032 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.032 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.032 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.032 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.033 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.033 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.033 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.033 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.033 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.033 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.034 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.034 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.034 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.034 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.034 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.034 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.035 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.035 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.035 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.035 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.035 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.035 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.035 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.036 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.036 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.036 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.036 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.045 142158 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.046 142158 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.046 142158 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.047 142158 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.047 142158 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.062 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name d8ff4ac4-2bee-48db-b79e-2466bc4db046 (UUID: d8ff4ac4-2bee-48db-b79e-2466bc4db046) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.098 142158 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.098 142158 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.098 142158 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.099 142158 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.103 142158 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.110 142158 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.117 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'd8ff4ac4-2bee-48db-b79e-2466bc4db046'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], external_ids={}, name=d8ff4ac4-2bee-48db-b79e-2466bc4db046, nb_cfg_timestamp=1763931095829, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.118 142158 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f53b965ef70>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.119 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.119 142158 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.120 142158 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.120 142158 INFO oslo_service.service [-] Starting 1 workers
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.124 142158 DEBUG oslo_service.service [-] Started child 142266 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.127 142266 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-954591'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.128 142158 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpnjoxuln_/privsep.sock']
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.192 142266 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.192 142266 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.192 142266 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.195 142266 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.202 142266 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.208 142266 INFO eventlet.wsgi.server [-] (142266) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Nov 23 20:52:51 compute-1 sshd-session[133481]: Connection closed by 192.168.122.30 port 46582
Nov 23 20:52:51 compute-1 sshd-session[133478]: pam_unix(sshd:session): session closed for user zuul
Nov 23 20:52:51 compute-1 systemd[1]: session-51.scope: Deactivated successfully.
Nov 23 20:52:51 compute-1 systemd[1]: session-51.scope: Consumed 52.519s CPU time.
Nov 23 20:52:51 compute-1 systemd-logind[793]: Session 51 logged out. Waiting for processes to exit.
Nov 23 20:52:51 compute-1 systemd-logind[793]: Removed session 51.
Nov 23 20:52:51 compute-1 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.783 142158 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.784 142158 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpnjoxuln_/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.685 142272 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.689 142272 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.693 142272 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.693 142272 INFO oslo.privsep.daemon [-] privsep daemon running as pid 142272
Nov 23 20:52:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.786 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[916c196a-cb37-4057-9df5-a52daf463bf1]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 20:52:52 compute-1 ceph-mon[80135]: pgmap v335: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 0 B/s wr, 168 op/s
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.280 142272 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.280 142272 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.280 142272 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 20:52:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:52.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:52.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.821 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[54865a17-05b1-4019-b6c2-f71f50a251b6]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.823 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=d8ff4ac4-2bee-48db-b79e-2466bc4db046, column=external_ids, values=({'neutron:ovn-metadata-id': '37f8a20d-4d8e-5752-b1f6-ae94c68755e0'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.841 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d8ff4ac4-2bee-48db-b79e-2466bc4db046, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.847 142158 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.847 142158 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.847 142158 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.847 142158 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.847 142158 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.847 142158 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.847 142158 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.848 142158 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.848 142158 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.848 142158 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.848 142158 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.848 142158 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.848 142158 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.848 142158 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.849 142158 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.849 142158 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.849 142158 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.849 142158 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.849 142158 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.849 142158 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.849 142158 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.849 142158 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.850 142158 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.850 142158 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.850 142158 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.850 142158 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.850 142158 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.850 142158 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.850 142158 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.850 142158 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.850 142158 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.851 142158 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.851 142158 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.851 142158 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.851 142158 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.851 142158 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.851 142158 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.851 142158 DEBUG oslo_service.service [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.851 142158 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.852 142158 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.852 142158 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.852 142158 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.852 142158 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.852 142158 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.852 142158 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.852 142158 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.852 142158 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.852 142158 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.853 142158 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.853 142158 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.853 142158 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.853 142158 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.853 142158 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.853 142158 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.853 142158 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.853 142158 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.853 142158 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.854 142158 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.854 142158 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.854 142158 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.854 142158 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.854 142158 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.854 142158 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.854 142158 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.855 142158 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.855 142158 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.855 142158 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.855 142158 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.855 142158 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.855 142158 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.855 142158 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.855 142158 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.856 142158 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.856 142158 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.856 142158 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.856 142158 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.856 142158 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.856 142158 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.856 142158 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.856 142158 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.856 142158 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.857 142158 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.857 142158 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.857 142158 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.857 142158 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.857 142158 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.857 142158 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.857 142158 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.857 142158 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.857 142158 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.858 142158 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.858 142158 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.858 142158 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.858 142158 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.858 142158 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.858 142158 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.858 142158 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.858 142158 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.858 142158 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.858 142158 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.859 142158 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.859 142158 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.859 142158 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.859 142158 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.859 142158 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.859 142158 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.859 142158 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.859 142158 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.860 142158 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.860 142158 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.860 142158 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.860 142158 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.860 142158 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.860 142158 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.860 142158 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.861 142158 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.861 142158 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.861 142158 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.861 142158 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.861 142158 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.861 142158 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.861 142158 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.861 142158 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.861 142158 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.862 142158 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.862 142158 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.862 142158 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.862 142158 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.862 142158 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.862 142158 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.862 142158 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.862 142158 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.863 142158 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.863 142158 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.863 142158 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.863 142158 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.863 142158 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.863 142158 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.863 142158 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.863 142158 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.863 142158 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.864 142158 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.864 142158 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.864 142158 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.864 142158 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.864 142158 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.864 142158 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.864 142158 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.864 142158 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.864 142158 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.864 142158 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.865 142158 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.865 142158 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.865 142158 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.865 142158 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.865 142158 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.865 142158 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.865 142158 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.865 142158 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.865 142158 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.865 142158 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.866 142158 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.866 142158 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.866 142158 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.866 142158 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.866 142158 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.866 142158 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.866 142158 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.866 142158 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.866 142158 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.867 142158 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.867 142158 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.867 142158 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.867 142158 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.867 142158 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.867 142158 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.867 142158 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.867 142158 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.868 142158 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.868 142158 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.868 142158 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.868 142158 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.868 142158 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.868 142158 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.868 142158 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.868 142158 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.868 142158 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.869 142158 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.869 142158 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.869 142158 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.869 142158 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.869 142158 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.869 142158 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.869 142158 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.869 142158 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.869 142158 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.870 142158 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.870 142158 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.870 142158 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.870 142158 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.870 142158 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.870 142158 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.870 142158 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.870 142158 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.870 142158 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.870 142158 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.871 142158 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.871 142158 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.871 142158 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.871 142158 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.871 142158 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.871 142158 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.871 142158 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.871 142158 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.871 142158 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.871 142158 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.872 142158 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.872 142158 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.872 142158 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.872 142158 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.872 142158 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.872 142158 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.872 142158 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.872 142158 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.872 142158 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.872 142158 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.873 142158 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.873 142158 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.873 142158 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.873 142158 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.873 142158 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.873 142158 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.873 142158 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.873 142158 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.873 142158 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.873 142158 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.874 142158 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.874 142158 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.874 142158 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.874 142158 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.874 142158 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.874 142158 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.874 142158 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.874 142158 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.874 142158 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.875 142158 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.875 142158 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.875 142158 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.875 142158 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.875 142158 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.875 142158 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.875 142158 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.875 142158 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.875 142158 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.875 142158 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.876 142158 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.876 142158 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.876 142158 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.876 142158 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.876 142158 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.876 142158 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.876 142158 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.876 142158 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.876 142158 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.877 142158 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.877 142158 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.877 142158 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.877 142158 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.877 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.877 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.877 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.877 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.877 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.878 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.878 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.878 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.878 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.878 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.878 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.878 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.878 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.878 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.878 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.879 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.879 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.879 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.879 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.879 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.879 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.879 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.879 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.879 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.879 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.880 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.880 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.880 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.880 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.880 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.880 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.880 142158 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.880 142158 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.881 142158 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.881 142158 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 20:52:52 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.881 142158 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 23 20:52:54 compute-1 ceph-mon[80135]: pgmap v336: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 0 B/s wr, 168 op/s
Nov 23 20:52:54 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Scheduled restart job, restart counter is at 2.
Nov 23 20:52:54 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 20:52:54 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.780s CPU time.
Nov 23 20:52:54 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 20:52:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:54.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:54 compute-1 podman[142323]: 2025-11-23 20:52:54.508902274 +0000 UTC m=+0.037092244 container create 1ff959a10d68e7580d7be117c171df90d016cecac18e249b4316df720bf9ce01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 23 20:52:54 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb5a284bd58b76c132ec5c1fe7fa8b99f88ae28b2871ecbfc5ae4312bab65a48/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 23 20:52:54 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb5a284bd58b76c132ec5c1fe7fa8b99f88ae28b2871ecbfc5ae4312bab65a48/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 20:52:54 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb5a284bd58b76c132ec5c1fe7fa8b99f88ae28b2871ecbfc5ae4312bab65a48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 20:52:54 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb5a284bd58b76c132ec5c1fe7fa8b99f88ae28b2871ecbfc5ae4312bab65a48/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.fuxuha-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 20:52:54 compute-1 podman[142323]: 2025-11-23 20:52:54.563440625 +0000 UTC m=+0.091630595 container init 1ff959a10d68e7580d7be117c171df90d016cecac18e249b4316df720bf9ce01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Nov 23 20:52:54 compute-1 podman[142323]: 2025-11-23 20:52:54.575278852 +0000 UTC m=+0.103468822 container start 1ff959a10d68e7580d7be117c171df90d016cecac18e249b4316df720bf9ce01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 20:52:54 compute-1 bash[142323]: 1ff959a10d68e7580d7be117c171df90d016cecac18e249b4316df720bf9ce01
Nov 23 20:52:54 compute-1 podman[142323]: 2025-11-23 20:52:54.494000096 +0000 UTC m=+0.022190086 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 20:52:54 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 20:52:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:54.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:54 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:52:54 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 23 20:52:54 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:52:54 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 23 20:52:54 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:52:54 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 23 20:52:54 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:52:54 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 23 20:52:54 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:52:54 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 23 20:52:54 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:52:54 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 23 20:52:54 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:52:54 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 23 20:52:54 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:52:54 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 20:52:55 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:52:56 compute-1 sshd-session[142381]: Accepted publickey for zuul from 192.168.122.30 port 38926 ssh2: ECDSA SHA256:7LF3rB/846W//CS4OIcVKlH1BXQGVCcZuH+b9rjPyTo
Nov 23 20:52:56 compute-1 systemd-logind[793]: New session 52 of user zuul.
Nov 23 20:52:56 compute-1 systemd[1]: Started Session 52 of User zuul.
Nov 23 20:52:56 compute-1 sshd-session[142381]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 20:52:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:56.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:56.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:57 compute-1 python3.9[142534]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:52:57 compute-1 ceph-mon[80135]: pgmap v337: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 0 B/s wr, 168 op/s
Nov 23 20:52:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:58.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:52:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:52:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:58.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:52:58 compute-1 sudo[142689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwqnlwrmreihmfhyksraeetjahktpbso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931178.273787-63-211222618628908/AnsiballZ_command.py'
Nov 23 20:52:58 compute-1 sudo[142689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:52:58 compute-1 podman[142691]: 2025-11-23 20:52:58.836507689 +0000 UTC m=+0.076146820 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 23 20:52:58 compute-1 ceph-mon[80135]: pgmap v338: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 0 B/s wr, 168 op/s
Nov 23 20:52:58 compute-1 python3.9[142692]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:52:58 compute-1 sudo[142689]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:59 compute-1 sudo[142756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:52:59 compute-1 sudo[142756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:52:59 compute-1 sudo[142756]: pam_unix(sudo:session): session closed for user root
Nov 23 20:52:59 compute-1 ceph-mon[80135]: pgmap v339: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 0 B/s wr, 168 op/s
Nov 23 20:53:00 compute-1 sudo[142908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywlzveatztkbfehwphgqbyhdelhvynzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931179.513355-96-211225608316775/AnsiballZ_systemd_service.py'
Nov 23 20:53:00 compute-1 sudo[142908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:53:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:00.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:00 compute-1 python3.9[142910]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 20:53:00 compute-1 systemd[1]: Reloading.
Nov 23 20:53:00 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:53:00 compute-1 systemd-rc-local-generator[142937]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:53:00 compute-1 systemd-sysv-generator[142940]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:53:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:00.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:00 compute-1 sudo[142908]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:00 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 20:53:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:00 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:53:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:00 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 20:53:01 compute-1 python3.9[143096]: ansible-ansible.builtin.service_facts Invoked
Nov 23 20:53:01 compute-1 network[143113]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 20:53:01 compute-1 network[143114]: 'network-scripts' will be removed from distribution in near future.
Nov 23 20:53:01 compute-1 network[143115]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 20:53:01 compute-1 ceph-mon[80135]: pgmap v340: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 426 B/s wr, 169 op/s
Nov 23 20:53:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:02.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:02.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205303 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 20:53:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:53:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:04.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:53:04 compute-1 ceph-mon[80135]: pgmap v341: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Nov 23 20:53:04 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:53:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:04.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:04 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 20:53:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:04 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 20:53:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:04 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:53:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:05 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 20:53:05 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:53:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:06.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:06 compute-1 ceph-mon[80135]: pgmap v342: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 20:53:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:06.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205307 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 20:53:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [NOTICE] 326/205307 (4) : haproxy version is 2.3.17-d1c9119
Nov 23 20:53:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [NOTICE] 326/205307 (4) : path to executable is /usr/local/sbin/haproxy
Nov 23 20:53:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [ALERT] 326/205307 (4) : backend 'backend' has no server available!
Nov 23 20:53:07 compute-1 ceph-mon[80135]: pgmap v343: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 20:53:08 compute-1 sudo[143378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agwmwheknpefyviaklzuviamqcvtdyve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931188.0732665-153-226133279099767/AnsiballZ_systemd_service.py'
Nov 23 20:53:08 compute-1 sudo[143378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:53:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:53:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:08.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:53:08 compute-1 python3.9[143380]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 20:53:08 compute-1 sudo[143378]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:09 compute-1 sudo[143531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eubsrbemoqgccnypgdusnbwdlfmrmkwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931188.842829-153-118509886278675/AnsiballZ_systemd_service.py'
Nov 23 20:53:09 compute-1 sudo[143531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:53:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:09 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 20:53:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:09 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 20:53:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:09 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:53:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:53:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:09.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:53:09 compute-1 python3.9[143533]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 20:53:09 compute-1 sudo[143531]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:10 compute-1 sudo[143685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcslfqkbrkbyxwlaxzeincfmzelmhkmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931189.721236-153-248185863347317/AnsiballZ_systemd_service.py'
Nov 23 20:53:10 compute-1 sudo[143685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:53:10 compute-1 python3.9[143687]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 20:53:10 compute-1 ceph-mon[80135]: pgmap v344: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 20:53:10 compute-1 sudo[143685]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:10 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:53:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:10.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:10 compute-1 sudo[143838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njbaujevlkoiqsrdkstyyyayrenfytxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931190.5704768-153-279095485887408/AnsiballZ_systemd_service.py'
Nov 23 20:53:10 compute-1 sudo[143838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:53:11 compute-1 python3.9[143840]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 20:53:11 compute-1 sudo[143838]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:11.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:11 compute-1 sudo[143992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvetetvnftowepfparsdvkbckcbwuxow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931191.3443394-153-193648347431247/AnsiballZ_systemd_service.py'
Nov 23 20:53:11 compute-1 sudo[143992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:53:11 compute-1 python3.9[143994]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 20:53:11 compute-1 sudo[143992]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:12 compute-1 ceph-mon[80135]: pgmap v345: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 426 B/s wr, 2 op/s
Nov 23 20:53:12 compute-1 sudo[144145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjlejkbnceoysmnmarodsyugauydtlhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931192.0984845-153-179456080267136/AnsiballZ_systemd_service.py'
Nov 23 20:53:12 compute-1 sudo[144145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:53:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:53:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:12.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:53:12 compute-1 python3.9[144147]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 20:53:12 compute-1 sudo[144145]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:13 compute-1 sudo[144298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txyzdjztqpybqkndmyxvhdsobmqyjnah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931192.9021146-153-214375190614338/AnsiballZ_systemd_service.py'
Nov 23 20:53:13 compute-1 sudo[144298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:53:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:13.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:13 compute-1 python3.9[144300]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 20:53:13 compute-1 sudo[144298]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:14 compute-1 ceph-mon[80135]: pgmap v346: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 23 20:53:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:53:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:14.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:53:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:15.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 20:53:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 23 20:53:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 23 20:53:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 23 20:53:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 23 20:53:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 23 20:53:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 23 20:53:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 20:53:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 20:53:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 20:53:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 23 20:53:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 20:53:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 23 20:53:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 23 20:53:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 23 20:53:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 23 20:53:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 23 20:53:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 23 20:53:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 23 20:53:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 23 20:53:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 23 20:53:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 23 20:53:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 23 20:53:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 23 20:53:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 20:53:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 23 20:53:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 20:53:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 20:53:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4fa4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:15 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:53:15 compute-1 ceph-mon[80135]: pgmap v347: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 2 op/s
Nov 23 20:53:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f980014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f80000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:16.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:17 compute-1 sudo[144469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsjjnpjzdqzjaaxsbzelvcrujpjqqixj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931196.8773801-309-148938742010331/AnsiballZ_file.py'
Nov 23 20:53:17 compute-1 sudo[144469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:53:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:53:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:17.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:53:17 compute-1 python3.9[144471]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:53:17 compute-1 sudo[144469]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:17 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f7c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:17 compute-1 sudo[144622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmrebfgfiwebfbsimenpklpebuyvcsqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931197.6008322-309-30987728136871/AnsiballZ_file.py'
Nov 23 20:53:17 compute-1 ceph-mon[80135]: pgmap v348: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 20:53:17 compute-1 sudo[144622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:53:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:17 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f88000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205317 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 20:53:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:17 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f980021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:18 compute-1 python3.9[144624]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:53:18 compute-1 sudo[144622]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:18 compute-1 sudo[144774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrelwuuyqpeqydoknvnxjxxgzdelosav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931198.2504537-309-145654571120702/AnsiballZ_file.py'
Nov 23 20:53:18 compute-1 sudo[144774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:53:18 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:18 : epoch 69237426 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 20:53:18 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:18 : epoch 69237426 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:53:18 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:18 : epoch 69237426 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:53:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:18.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:18 compute-1 python3.9[144776]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:53:18 compute-1 sudo[144774]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:53:19 compute-1 sudo[144926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uodljcvojjugghkfaelkktfbkijhljpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931198.9325564-309-77405091009893/AnsiballZ_file.py'
Nov 23 20:53:19 compute-1 sudo[144926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:53:19 compute-1 sudo[144929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:53:19 compute-1 sudo[144929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:53:19 compute-1 sudo[144929]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:19 compute-1 python3.9[144928]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:53:19 compute-1 sudo[144926]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:53:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:19.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:53:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:19 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f800016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:19 compute-1 podman[145043]: 2025-11-23 20:53:19.641527188 +0000 UTC m=+0.054556210 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 23 20:53:19 compute-1 sudo[145123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dstoiczieuhuinrvdrqoilonncjbvbfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931199.482264-309-247128587679600/AnsiballZ_file.py'
Nov 23 20:53:19 compute-1 sudo[145123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:53:19 compute-1 python3.9[145125]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:53:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:19 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f7c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:19 compute-1 sudo[145123]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:19 compute-1 ceph-mon[80135]: pgmap v349: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 20:53:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:19 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f88001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:20 compute-1 sudo[145275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gotmljrekrgffssbcmxidhfarqmhcahz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931200.021474-309-66231356141924/AnsiballZ_file.py'
Nov 23 20:53:20 compute-1 sudo[145275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:53:20 compute-1 python3.9[145277]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:53:20 compute-1 sudo[145275]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:53:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:20.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:53:20 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:53:20 compute-1 sudo[145427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akdxiqluxxvcztzzzjmaaighiqksfvko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931200.6567945-309-81413616163269/AnsiballZ_file.py'
Nov 23 20:53:20 compute-1 sudo[145427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:53:21 compute-1 python3.9[145429]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:53:21 compute-1 sudo[145427]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:21.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:21 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f7c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:21 : epoch 69237426 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:53:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:21 : epoch 69237426 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:53:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:21 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f980021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:21 compute-1 ceph-mon[80135]: pgmap v350: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Nov 23 20:53:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:21 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f800016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:22.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:22 compute-1 sudo[145580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoyckvmyohqrgabjtbcklucbuyqopjmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931202.4177542-459-201301842760110/AnsiballZ_file.py'
Nov 23 20:53:22 compute-1 sudo[145580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:53:22 compute-1 python3.9[145582]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:53:22 compute-1 sudo[145580]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:23 compute-1 sudo[145732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpgshfaondofppyzqcrftttelagjsgkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931203.0301135-459-147014894561290/AnsiballZ_file.py'
Nov 23 20:53:23 compute-1 sudo[145732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:53:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:53:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:23.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:53:23 compute-1 python3.9[145734]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:53:23 compute-1 sudo[145732]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:23 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f88001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:23 compute-1 sudo[145885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-picyntdpcissgkhxhlkqwiwoswazwbmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931203.6280289-459-215785289200724/AnsiballZ_file.py'
Nov 23 20:53:23 compute-1 sudo[145885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:53:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:23 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f980021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:23 compute-1 ceph-mon[80135]: pgmap v351: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Nov 23 20:53:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:23 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f7c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:24 compute-1 python3.9[145887]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:53:24 compute-1 sudo[145885]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:24 compute-1 sudo[146037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttdzysuwfqxuychacagtnwevffxllxpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931204.2475095-459-49634805289574/AnsiballZ_file.py'
Nov 23 20:53:24 compute-1 sudo[146037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:53:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:53:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:24.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:53:24 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:24 : epoch 69237426 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 20:53:24 compute-1 python3.9[146039]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:53:24 compute-1 sudo[146037]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:25 compute-1 sudo[146189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muvkzooqatescyvobqkkptgvdaylkdti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931204.94339-459-110700748268927/AnsiballZ_file.py'
Nov 23 20:53:25 compute-1 sudo[146189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:53:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:25.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:25 compute-1 python3.9[146191]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:53:25 compute-1 sudo[146189]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:25 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f800016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:25 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:53:25 compute-1 sudo[146342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqjhpexxftfgxvrnqpxvgaloymmzupao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931205.5883696-459-279664531180085/AnsiballZ_file.py'
Nov 23 20:53:25 compute-1 sudo[146342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:53:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:25 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f88001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:25 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f980021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:25 compute-1 ceph-mon[80135]: pgmap v352: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.6 KiB/s wr, 5 op/s
Nov 23 20:53:26 compute-1 python3.9[146344]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:53:26 compute-1 sudo[146342]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:26 compute-1 sudo[146494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umguvgmcojqebymrivqhmrpflawdxtho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931206.181949-459-244154976625989/AnsiballZ_file.py'
Nov 23 20:53:26 compute-1 sudo[146494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:53:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:26.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:26 compute-1 python3.9[146496]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:53:26 compute-1 sudo[146494]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:27.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:27 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f7c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:27 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f80002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:27 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f88002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:27 compute-1 ceph-mon[80135]: pgmap v353: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 20:53:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:28.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:28 compute-1 sudo[146647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onyaukwxdjohxrtuwcwjvrbbvmqvebfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931208.5053744-612-235424860765906/AnsiballZ_command.py'
Nov 23 20:53:28 compute-1 sudo[146647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:53:28 compute-1 python3.9[146649]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:53:29 compute-1 sudo[146647]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:29 compute-1 podman[146652]: 2025-11-23 20:53:29.131517853 +0000 UTC m=+0.088926598 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Nov 23 20:53:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:29.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:29 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f980021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:29 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f7c002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:29 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f80002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:30 compute-1 python3.9[146828]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 23 20:53:30 compute-1 ceph-mon[80135]: pgmap v354: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 20:53:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:30.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:30 compute-1 sudo[146978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiiteagptfwlwbezyxijnotmfwnvilzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931210.4007607-666-84894962050536/AnsiballZ_systemd_service.py'
Nov 23 20:53:30 compute-1 sudo[146978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:53:30 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:53:31 compute-1 python3.9[146980]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 20:53:31 compute-1 systemd[1]: Reloading.
Nov 23 20:53:31 compute-1 systemd-rc-local-generator[147002]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:53:31 compute-1 systemd-sysv-generator[147006]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:53:31 compute-1 sudo[146978]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:31.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205331 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 20:53:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:31 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f88002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:31 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f980021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:31 compute-1 sudo[147166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjzeaqkodljrvkfaxzrqadweqjumclxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931211.6726635-690-231970191696868/AnsiballZ_command.py'
Nov 23 20:53:31 compute-1 sudo[147166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:53:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:31 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f7c002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:32 compute-1 ceph-mon[80135]: pgmap v355: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Nov 23 20:53:32 compute-1 python3.9[147168]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:53:32 compute-1 sudo[147166]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:32 compute-1 sudo[147319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uskitjtdvsfzjgcqbbygrljvnlmlmhzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931212.3602858-690-258369091313340/AnsiballZ_command.py'
Nov 23 20:53:32 compute-1 sudo[147319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:53:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:32.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:32 compute-1 python3.9[147321]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:53:32 compute-1 sudo[147319]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:33 compute-1 sudo[147472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwnneqqmtdcyoaviirnxbmqmxsthizej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931212.9600616-690-83649117442852/AnsiballZ_command.py'
Nov 23 20:53:33 compute-1 sudo[147472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:53:33 compute-1 python3.9[147474]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:53:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:33.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:33 compute-1 sudo[147472]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:33 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:33 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f80002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:33 compute-1 sudo[147626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siweszwmodsmakgdccwfrtxfuuklzbde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931213.5674365-690-163514426424270/AnsiballZ_command.py'
Nov 23 20:53:33 compute-1 sudo[147626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:53:33 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:33 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f80002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:33 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:33 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f980021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:33 compute-1 python3.9[147628]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:53:34 compute-1 sudo[147626]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:34 compute-1 ceph-mon[80135]: pgmap v356: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 511 B/s wr, 1 op/s
Nov 23 20:53:34 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:53:34 compute-1 sudo[147781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqjwnopuncyusajlfiqoqzrosumtvcec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931214.121068-690-264085342598284/AnsiballZ_command.py'
Nov 23 20:53:34 compute-1 sudo[147781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:53:34 compute-1 sshd-session[147629]: Invalid user gwei from 92.118.39.92 port 33078
Nov 23 20:53:34 compute-1 sshd-session[147629]: Connection closed by invalid user gwei 92.118.39.92 port 33078 [preauth]
Nov 23 20:53:34 compute-1 python3.9[147783]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:53:34 compute-1 sudo[147781]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:34.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:34 compute-1 sudo[147934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqmykdwxljfvqxkvocjlxqbuoattjtok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931214.7163756-690-108536333909623/AnsiballZ_command.py'
Nov 23 20:53:34 compute-1 sudo[147934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:53:35 compute-1 python3.9[147936]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:53:35 compute-1 sudo[147934]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:35.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:35 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f980021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:35 compute-1 sudo[148088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmszwpfqdcjijncovtwfplpljrelpjxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931215.342102-690-208140547318722/AnsiballZ_command.py'
Nov 23 20:53:35 compute-1 sudo[148088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:53:35 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:53:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:35 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f80002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:35 compute-1 python3.9[148090]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:53:35 compute-1 sudo[148088]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:35 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f80002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:36 compute-1 ceph-mon[80135]: pgmap v357: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 511 B/s wr, 1 op/s
Nov 23 20:53:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:36.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:37.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:37 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f7c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:37 : epoch 69237426 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 20:53:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:37 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f980021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:37 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f980021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:38.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:38 compute-1 ceph-mon[80135]: pgmap v358: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:53:39 compute-1 sudo[148117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:53:39 compute-1 sudo[148117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:53:39 compute-1 sudo[148117]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:39.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:39 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:39 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f88003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:39 compute-1 ceph-mon[80135]: pgmap v359: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:53:39 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:39 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f7c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:39 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:39 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f80003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:40 compute-1 sudo[148268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhhzxkykkcaynmooykjvxuowcyhyvgxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931219.7484057-852-204704506351757/AnsiballZ_getent.py'
Nov 23 20:53:40 compute-1 sudo[148268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:53:40 compute-1 python3.9[148270]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 23 20:53:40 compute-1 sudo[148268]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:40.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:40 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:40 : epoch 69237426 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 20:53:40 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:40 : epoch 69237426 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:53:40 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:53:40 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:40 : epoch 69237426 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:53:41 compute-1 sudo[148421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaponqbaopscehlxerdklunyyspungbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931220.726297-876-16452147198511/AnsiballZ_group.py'
Nov 23 20:53:41 compute-1 sudo[148421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:53:41 compute-1 python3.9[148423]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 23 20:53:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:41.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:41 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f980021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:41 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f88003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:41 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f7c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:42 compute-1 groupadd[148424]: group added to /etc/group: name=libvirt, GID=42473
Nov 23 20:53:42 compute-1 ceph-mon[80135]: pgmap v360: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Nov 23 20:53:42 compute-1 groupadd[148424]: group added to /etc/gshadow: name=libvirt
Nov 23 20:53:42 compute-1 groupadd[148424]: new group: name=libvirt, GID=42473
Nov 23 20:53:42 compute-1 sudo[148421]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:42.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:43 compute-1 sudo[148580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcraxdkoqrluaatfdnzcrlixthyimwds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931222.8110797-900-65949832881348/AnsiballZ_user.py'
Nov 23 20:53:43 compute-1 sudo[148580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:53:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:43.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:43 compute-1 python3.9[148582]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 23 20:53:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:43 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f80003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:43 compute-1 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 20:53:43 compute-1 useradd[148585]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Nov 23 20:53:43 compute-1 ceph-mon[80135]: pgmap v361: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 20:53:43 compute-1 sudo[148580]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:43 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f980021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:43 : epoch 69237426 : compute-1 : ganesha.nfsd-2[reaper] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000010:nfs.cephfs.0: -2
Nov 23 20:53:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:43 : epoch 69237426 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 20:53:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:43 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f980021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:44 compute-1 sudo[148742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnqdotfbzocqmircjdqjavaoeafgdffb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931224.21904-933-260568820432968/AnsiballZ_setup.py'
Nov 23 20:53:44 compute-1 sudo[148742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:53:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:44.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:44 compute-1 python3.9[148744]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 20:53:44 compute-1 sudo[148742]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:45 compute-1 sudo[148826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isfxuairrmcmuygmbvzyrznqhkmymlax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931224.21904-933-260568820432968/AnsiballZ_dnf.py'
Nov 23 20:53:45 compute-1 sudo[148826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:53:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:45.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:45 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:45 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f7c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:45 compute-1 python3.9[148828]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 20:53:45 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:53:45 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:45 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f7c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:45 compute-1 ceph-mon[80135]: pgmap v362: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 20:53:45 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:45 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f7c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:46.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:47.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:47 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4fa4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:47 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f88003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:47 compute-1 ceph-mon[80135]: pgmap v363: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 20:53:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:47 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f74000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:53:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:48.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:53:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:53:49 compute-1 sudo[148841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:53:49 compute-1 sudo[148841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:53:49 compute-1 sudo[148841]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:49 compute-1 sudo[148866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 20:53:49 compute-1 sudo[148866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:53:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205349 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 20:53:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:53:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:49.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:53:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:49 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f7c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:49 compute-1 sudo[148866]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:49 compute-1 podman[148923]: 2025-11-23 20:53:49.889576275 +0000 UTC m=+0.051077850 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 23 20:53:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:49 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4fa4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:53:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:49 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f88003c60 fd 39 proxy ignored for local
Nov 23 20:53:49 compute-1 kernel: ganesha.nfsd[144332]: segfault at 50 ip 00007f505110132e sp 00007f50167fb210 error 4 in libntirpc.so.5.8[7f50510e6000+2c000] likely on CPU 3 (core 0, socket 3)
Nov 23 20:53:49 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 23 20:53:49 compute-1 ceph-mon[80135]: pgmap v364: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 20:53:49 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:53:49 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 20:53:49 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:53:49 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:53:49 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 20:53:49 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 20:53:49 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:53:50 compute-1 systemd[1]: Started Process Core Dump (PID 148943/UID 0).
Nov 23 20:53:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:50.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:50 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:53:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:53:51.048 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 20:53:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:53:51.048 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 20:53:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:53:51.048 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 20:53:51 compute-1 systemd-coredump[148944]: Process 142342 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 45:
                                                    #0  0x00007f505110132e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 23 20:53:51 compute-1 systemd[1]: systemd-coredump@2-148943-0.service: Deactivated successfully.
Nov 23 20:53:51 compute-1 systemd[1]: systemd-coredump@2-148943-0.service: Consumed 1.198s CPU time.
Nov 23 20:53:51 compute-1 podman[148950]: 2025-11-23 20:53:51.331988459 +0000 UTC m=+0.028473826 container died 1ff959a10d68e7580d7be117c171df90d016cecac18e249b4316df720bf9ce01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 20:53:51 compute-1 systemd[1]: var-lib-containers-storage-overlay-cb5a284bd58b76c132ec5c1fe7fa8b99f88ae28b2871ecbfc5ae4312bab65a48-merged.mount: Deactivated successfully.
Nov 23 20:53:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:51.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:51 compute-1 podman[148950]: 2025-11-23 20:53:51.490224036 +0000 UTC m=+0.186709393 container remove 1ff959a10d68e7580d7be117c171df90d016cecac18e249b4316df720bf9ce01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 23 20:53:51 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Main process exited, code=exited, status=139/n/a
Nov 23 20:53:51 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Failed with result 'exit-code'.
Nov 23 20:53:51 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.456s CPU time.
Nov 23 20:53:52 compute-1 ceph-mon[80135]: pgmap v365: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 20:53:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:52.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:53.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:53 compute-1 sshd-session[148993]: Invalid user local from 118.145.189.160 port 50230
Nov 23 20:53:54 compute-1 ceph-mon[80135]: pgmap v366: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Nov 23 20:53:54 compute-1 sshd-session[148993]: Received disconnect from 118.145.189.160 port 50230:11: Bye Bye [preauth]
Nov 23 20:53:54 compute-1 sshd-session[148993]: Disconnected from invalid user local 118.145.189.160 port 50230 [preauth]
Nov 23 20:53:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:54.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:54 compute-1 sudo[149037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 20:53:54 compute-1 sudo[149037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:53:54 compute-1 sudo[149037]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:55.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:55 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:53:55 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:53:55 compute-1 ceph-mon[80135]: pgmap v367: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Nov 23 20:53:55 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:53:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205355 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 20:53:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:53:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:56.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:53:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:57.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:57 compute-1 ceph-mon[80135]: pgmap v368: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 23 20:53:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:58.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:53:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:53:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:59.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:53:59 compute-1 sudo[149196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:53:59 compute-1 sudo[149196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:53:59 compute-1 sudo[149196]: pam_unix(sudo:session): session closed for user root
Nov 23 20:53:59 compute-1 podman[149220]: 2025-11-23 20:53:59.565579614 +0000 UTC m=+0.079868824 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 20:53:59 compute-1 ceph-mon[80135]: pgmap v369: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 23 20:54:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:00.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:00 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:54:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:01.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:01 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Scheduled restart job, restart counter is at 3.
Nov 23 20:54:01 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 20:54:01 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.456s CPU time.
Nov 23 20:54:01 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 20:54:01 compute-1 ceph-mon[80135]: pgmap v370: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 20:54:02 compute-1 podman[149299]: 2025-11-23 20:54:02.071794043 +0000 UTC m=+0.079227608 container create 976578fc2e77df5184e890816fcd8c1a37386781e43ef95d07c87ac1de6451e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 20:54:02 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b744fd0527429d8e05e010b7cb984d8007475a9014869ff0fbfe766f4a3c302a/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 23 20:54:02 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b744fd0527429d8e05e010b7cb984d8007475a9014869ff0fbfe766f4a3c302a/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 20:54:02 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b744fd0527429d8e05e010b7cb984d8007475a9014869ff0fbfe766f4a3c302a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 20:54:02 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b744fd0527429d8e05e010b7cb984d8007475a9014869ff0fbfe766f4a3c302a/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.fuxuha-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 20:54:02 compute-1 podman[149299]: 2025-11-23 20:54:02.015763576 +0000 UTC m=+0.023197171 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 20:54:02 compute-1 podman[149299]: 2025-11-23 20:54:02.125807798 +0000 UTC m=+0.133241383 container init 976578fc2e77df5184e890816fcd8c1a37386781e43ef95d07c87ac1de6451e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 20:54:02 compute-1 podman[149299]: 2025-11-23 20:54:02.130493919 +0000 UTC m=+0.137927484 container start 976578fc2e77df5184e890816fcd8c1a37386781e43ef95d07c87ac1de6451e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 23 20:54:02 compute-1 bash[149299]: 976578fc2e77df5184e890816fcd8c1a37386781e43ef95d07c87ac1de6451e3
Nov 23 20:54:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:02 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 23 20:54:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:02 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 23 20:54:02 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 20:54:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:02 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 23 20:54:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:02 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 23 20:54:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:02 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 23 20:54:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:02 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 23 20:54:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:02 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 23 20:54:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:02 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 20:54:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:02.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:54:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:03.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:54:04 compute-1 ceph-mon[80135]: pgmap v371: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 20:54:04 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:54:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:04.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:05.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:05 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:54:06 compute-1 ceph-mon[80135]: pgmap v372: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:54:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:06.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:54:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:07.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:54:08 compute-1 ceph-mon[80135]: pgmap v373: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:54:08 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:08 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 20:54:08 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:08 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:54:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:08.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:54:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:09.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:54:10 compute-1 sshd-session[149365]: Received disconnect from 102.176.81.29 port 36946:11: Bye Bye [preauth]
Nov 23 20:54:10 compute-1 sshd-session[149365]: Disconnected from authenticating user root 102.176.81.29 port 36946 [preauth]
Nov 23 20:54:10 compute-1 ceph-mon[80135]: pgmap v374: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:54:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:54:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:10.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:54:10 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:54:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:11.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205411 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 20:54:12 compute-1 ceph-mon[80135]: pgmap v375: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Nov 23 20:54:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:54:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:12.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:54:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:13.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:13 compute-1 ceph-mon[80135]: pgmap v376: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Nov 23 20:54:14 compute-1 kernel: SELinux:  Converting 2772 SID table entries...
Nov 23 20:54:14 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 20:54:14 compute-1 kernel: SELinux:  policy capability open_perms=1
Nov 23 20:54:14 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 20:54:14 compute-1 kernel: SELinux:  policy capability always_check_network=0
Nov 23 20:54:14 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 20:54:14 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 20:54:14 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 20:54:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 20:54:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 23 20:54:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 23 20:54:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 23 20:54:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 23 20:54:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 23 20:54:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 23 20:54:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 20:54:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 20:54:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 20:54:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 23 20:54:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 20:54:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 23 20:54:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 23 20:54:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 23 20:54:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 23 20:54:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 23 20:54:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 23 20:54:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 23 20:54:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 23 20:54:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 23 20:54:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 23 20:54:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 23 20:54:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 23 20:54:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 20:54:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 23 20:54:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 20:54:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:14.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:54:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:15.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:54:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:15 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb800000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:54:15 compute-1 ceph-mon[80135]: pgmap v377: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 20:54:15 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:54:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:15 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e80016e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:54:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:15 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7dc000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:54:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:16.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:54:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:17.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:54:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:17 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb800000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:54:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:17 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7fc001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:54:17 compute-1 ceph-mon[80135]: pgmap v378: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 853 B/s wr, 2 op/s
Nov 23 20:54:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205417 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 20:54:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:17 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e8002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:54:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:18.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:19 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:54:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:54:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:19.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:54:19 compute-1 sudo[149398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:54:19 compute-1 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Nov 23 20:54:19 compute-1 sudo[149398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:54:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:19 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7dc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:54:19 compute-1 sudo[149398]: pam_unix(sudo:session): session closed for user root
Nov 23 20:54:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:19 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb800000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:54:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:19 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e8002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:54:20 compute-1 ceph-mon[80135]: pgmap v379: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 853 B/s wr, 2 op/s
Nov 23 20:54:20 compute-1 podman[149423]: 2025-11-23 20:54:20.638711593 +0000 UTC m=+0.051091475 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 23 20:54:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:54:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:20.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:54:20 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:54:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:21 : epoch 6923746a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 20:54:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:21.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:21 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e8002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:54:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:21 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7dc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:54:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:22 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb800000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:54:22 compute-1 ceph-mon[80135]: pgmap v380: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 20:54:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:22.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:23.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:23 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e8002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:54:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:23 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e8002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:54:24 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:24 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7dc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:54:24 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:24 : epoch 6923746a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 20:54:24 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:24 : epoch 6923746a : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:54:24 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:24 : epoch 6923746a : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:54:24 compute-1 ceph-mon[80135]: pgmap v381: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 170 B/s wr, 1 op/s
Nov 23 20:54:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:24.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:24 compute-1 kernel: SELinux:  Converting 2772 SID table entries...
Nov 23 20:54:24 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 20:54:24 compute-1 kernel: SELinux:  policy capability open_perms=1
Nov 23 20:54:24 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 20:54:24 compute-1 kernel: SELinux:  policy capability always_check_network=0
Nov 23 20:54:24 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 20:54:24 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 20:54:24 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 20:54:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:25.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:25 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8000091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:54:25 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:54:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:25 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e8002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:54:26 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:26 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e8002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:54:26 compute-1 sshd-session[149449]: Invalid user debian from 185.156.73.233 port 41778
Nov 23 20:54:26 compute-1 sshd-session[149449]: Connection closed by invalid user debian 185.156.73.233 port 41778 [preauth]
Nov 23 20:54:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:54:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:26.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:54:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:27 : epoch 6923746a : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 20:54:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:27.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:27 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7dc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:54:27 compute-1 ceph-mon[80135]: pgmap v382: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 20:54:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:27 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8000091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:54:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:28 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e8002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:54:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:28.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:28 compute-1 ceph-mon[80135]: pgmap v383: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 852 B/s wr, 3 op/s
Nov 23 20:54:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:54:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:29.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:54:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:29 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e8002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:54:29 compute-1 ceph-mon[80135]: pgmap v384: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 852 B/s wr, 3 op/s
Nov 23 20:54:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:29 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7dc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:54:30 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:30 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb800009ec0 fd 38 proxy ignored for local
Nov 23 20:54:30 compute-1 kernel: ganesha.nfsd[149379]: segfault at 50 ip 00007fb8af74032e sp 00007fb87effc210 error 4 in libntirpc.so.5.8[7fb8af725000+2c000] likely on CPU 2 (core 0, socket 2)
Nov 23 20:54:30 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 23 20:54:30 compute-1 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 23 20:54:30 compute-1 systemd[1]: Started Process Core Dump (PID 149457/UID 0).
Nov 23 20:54:30 compute-1 podman[149458]: 2025-11-23 20:54:30.175644031 +0000 UTC m=+0.107979231 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 23 20:54:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:30.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:30 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:54:31 compute-1 systemd-coredump[149459]: Process 149318 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 41:
                                                    #0  0x00007fb8af74032e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 23 20:54:31 compute-1 systemd[1]: systemd-coredump@3-149457-0.service: Deactivated successfully.
Nov 23 20:54:31 compute-1 podman[149489]: 2025-11-23 20:54:31.303978189 +0000 UTC m=+0.028534907 container died 976578fc2e77df5184e890816fcd8c1a37386781e43ef95d07c87ac1de6451e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid)
Nov 23 20:54:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:31.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:31 compute-1 systemd[1]: var-lib-containers-storage-overlay-b744fd0527429d8e05e010b7cb984d8007475a9014869ff0fbfe766f4a3c302a-merged.mount: Deactivated successfully.
Nov 23 20:54:31 compute-1 podman[149489]: 2025-11-23 20:54:31.714737156 +0000 UTC m=+0.439293844 container remove 976578fc2e77df5184e890816fcd8c1a37386781e43ef95d07c87ac1de6451e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 20:54:31 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Main process exited, code=exited, status=139/n/a
Nov 23 20:54:31 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Failed with result 'exit-code'.
Nov 23 20:54:31 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.252s CPU time.
Nov 23 20:54:32 compute-1 ceph-mon[80135]: pgmap v385: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 20:54:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:32.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:32 compute-1 sshd-session[149533]: Invalid user solv from 161.35.133.66 port 38724
Nov 23 20:54:32 compute-1 sshd-session[149533]: Connection closed by invalid user solv 161.35.133.66 port 38724 [preauth]
Nov 23 20:54:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:33.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:33 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205433 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 20:54:33 compute-1 ceph-mon[80135]: pgmap v386: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 20:54:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:54:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:34.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:35.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:35 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:54:36 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205436 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 20:54:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:36.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:37.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:38 compute-1 ceph-mon[80135]: pgmap v387: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 20:54:38 compute-1 ceph-mon[80135]: pgmap v388: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 170 B/s wr, 0 op/s
Nov 23 20:54:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:38.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:39.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:39 compute-1 sudo[150672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:54:39 compute-1 sudo[150672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:54:39 compute-1 sudo[150672]: pam_unix(sudo:session): session closed for user root
Nov 23 20:54:39 compute-1 ceph-mon[80135]: pgmap v389: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 170 B/s wr, 0 op/s
Nov 23 20:54:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:40.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:40 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:54:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:41.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:41 compute-1 ceph-mon[80135]: pgmap v390: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 170 B/s wr, 0 op/s
Nov 23 20:54:42 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Scheduled restart job, restart counter is at 4.
Nov 23 20:54:42 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 20:54:42 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.252s CPU time.
Nov 23 20:54:42 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 20:54:42 compute-1 podman[152551]: 2025-11-23 20:54:42.511988988 +0000 UTC m=+0.023003552 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 20:54:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:42.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:43 compute-1 podman[152551]: 2025-11-23 20:54:43.274855001 +0000 UTC m=+0.785869545 container create f2f2a3dd8fa50cb909b3976216c43e6961fdaa3c816f16d084b16b89ca08fb7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 23 20:54:43 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ffb40541bf744548711f283b1247279c9fdfd60049305220ba9ccbfd0bc0820/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 23 20:54:43 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ffb40541bf744548711f283b1247279c9fdfd60049305220ba9ccbfd0bc0820/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 20:54:43 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ffb40541bf744548711f283b1247279c9fdfd60049305220ba9ccbfd0bc0820/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 20:54:43 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ffb40541bf744548711f283b1247279c9fdfd60049305220ba9ccbfd0bc0820/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.fuxuha-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 20:54:43 compute-1 podman[152551]: 2025-11-23 20:54:43.457650975 +0000 UTC m=+0.968665599 container init f2f2a3dd8fa50cb909b3976216c43e6961fdaa3c816f16d084b16b89ca08fb7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 23 20:54:43 compute-1 podman[152551]: 2025-11-23 20:54:43.464294949 +0000 UTC m=+0.975309523 container start f2f2a3dd8fa50cb909b3976216c43e6961fdaa3c816f16d084b16b89ca08fb7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 20:54:43 compute-1 bash[152551]: f2f2a3dd8fa50cb909b3976216c43e6961fdaa3c816f16d084b16b89ca08fb7f
Nov 23 20:54:43 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 20:54:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:43 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 23 20:54:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:43 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 23 20:54:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:43.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:43 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 23 20:54:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:43 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 23 20:54:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:43 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 23 20:54:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:43 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 23 20:54:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:43 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 23 20:54:43 compute-1 ceph-mon[80135]: pgmap v391: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 23 20:54:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:43 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 20:54:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:44.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:45.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:45 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:54:45 compute-1 ceph-mon[80135]: pgmap v392: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:54:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:54:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:46.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:54:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:47.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:47 compute-1 ceph-mon[80135]: pgmap v393: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:54:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:54:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:48.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:54:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:54:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:49.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:49 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 20:54:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:49 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:54:49 compute-1 ceph-mon[80135]: pgmap v394: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:54:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:50.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:50 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:54:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:54:51.048 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 20:54:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:54:51.049 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 20:54:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:54:51.049 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 20:54:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:51.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:51 compute-1 podman[159000]: 2025-11-23 20:54:51.628712862 +0000 UTC m=+0.043882007 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 23 20:54:51 compute-1 ceph-mon[80135]: pgmap v395: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 20:54:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:54:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:52.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:54:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:53.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:53 compute-1 ceph-mon[80135]: pgmap v396: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 20:54:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:54:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:54.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:54:55 compute-1 sudo[161344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:54:55 compute-1 sudo[161344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:54:55 compute-1 sudo[161344]: pam_unix(sudo:session): session closed for user root
Nov 23 20:54:55 compute-1 sudo[161417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 23 20:54:55 compute-1 sudo[161417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:54:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:55.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:55 compute-1 podman[161850]: 2025-11-23 20:54:55.712917656 +0000 UTC m=+0.084034186 container exec e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 20:54:55 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:54:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 20:54:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 23 20:54:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 23 20:54:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 23 20:54:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 23 20:54:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 23 20:54:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 23 20:54:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 20:54:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 20:54:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 20:54:55 compute-1 podman[161850]: 2025-11-23 20:54:55.807348702 +0000 UTC m=+0.178465222 container exec_died e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 20:54:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 23 20:54:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 20:54:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 23 20:54:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 23 20:54:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 23 20:54:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 23 20:54:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 23 20:54:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 23 20:54:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 23 20:54:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 23 20:54:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 23 20:54:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 23 20:54:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 23 20:54:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 23 20:54:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 20:54:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 23 20:54:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 20:54:56 compute-1 ceph-mon[80135]: pgmap v397: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 20:54:56 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 23 20:54:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:56 : epoch 69237493 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae18000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:54:56 compute-1 podman[162347]: 2025-11-23 20:54:56.248076442 +0000 UTC m=+0.059503594 container exec 64d60b8099df0a9bc1b978bb8d0ff809e5476e0bdc0e1ff07d52a594a6c59770 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 20:54:56 compute-1 podman[162347]: 2025-11-23 20:54:56.25948602 +0000 UTC m=+0.070913192 container exec_died 64d60b8099df0a9bc1b978bb8d0ff809e5476e0bdc0e1ff07d52a594a6c59770 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 20:54:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:56 : epoch 69237493 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae00001970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:54:56 compute-1 podman[162643]: 2025-11-23 20:54:56.563198532 +0000 UTC m=+0.048198830 container exec f2f2a3dd8fa50cb909b3976216c43e6961fdaa3c816f16d084b16b89ca08fb7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 23 20:54:56 compute-1 podman[162643]: 2025-11-23 20:54:56.5761716 +0000 UTC m=+0.061171898 container exec_died f2f2a3dd8fa50cb909b3976216c43e6961fdaa3c816f16d084b16b89ca08fb7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True)
Nov 23 20:54:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:54:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:56.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:54:56 compute-1 podman[162854]: 2025-11-23 20:54:56.769095469 +0000 UTC m=+0.044833172 container exec 5efdb4ba0bcd5fe6f292f73f388707523f3095db64c5b10f074cdf2e15575dfb (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei)
Nov 23 20:54:56 compute-1 podman[162854]: 2025-11-23 20:54:56.779135411 +0000 UTC m=+0.054873094 container exec_died 5efdb4ba0bcd5fe6f292f73f388707523f3095db64c5b10f074cdf2e15575dfb (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei)
Nov 23 20:54:56 compute-1 podman[163068]: 2025-11-23 20:54:56.965099648 +0000 UTC m=+0.049947316 container exec 2804f80c8f66202230c93ef9e5dfb79827d221d8c2f51d077915585a4021bec3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, version=2.2.4, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, name=keepalived, description=keepalived for Ceph, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived)
Nov 23 20:54:56 compute-1 podman[163068]: 2025-11-23 20:54:56.976191548 +0000 UTC m=+0.061039206 container exec_died 2804f80c8f66202230c93ef9e5dfb79827d221d8c2f51d077915585a4021bec3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, io.buildah.version=1.28.2, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public)
Nov 23 20:54:57 compute-1 sudo[161417]: pam_unix(sudo:session): session closed for user root
Nov 23 20:54:57 compute-1 sudo[163221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:54:57 compute-1 sudo[163221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:54:57 compute-1 sudo[163221]: pam_unix(sudo:session): session closed for user root
Nov 23 20:54:57 compute-1 sudo[163289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 20:54:57 compute-1 sudo[163289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:54:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:57.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:57 : epoch 69237493 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fadf0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:54:57 compute-1 sudo[163289]: pam_unix(sudo:session): session closed for user root
Nov 23 20:54:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:58 : epoch 69237493 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fadf4000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:54:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205458 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 20:54:58 compute-1 ceph-mon[80135]: pgmap v398: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 20:54:58 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:54:58 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:54:58 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 23 20:54:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:58 : epoch 69237493 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae00001970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:54:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:58.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:59 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:54:59 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:54:59 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 23 20:54:59 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:54:59 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 20:54:59 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:54:59 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:54:59 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 20:54:59 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 20:54:59 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:54:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:54:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:54:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:59.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:54:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:59 : epoch 69237493 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae08001250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:54:59 compute-1 sudo[165322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:54:59 compute-1 sudo[165322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:54:59 compute-1 sudo[165322]: pam_unix(sudo:session): session closed for user root
Nov 23 20:55:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:55:00 : epoch 69237493 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fadf00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:00 compute-1 ceph-mon[80135]: pgmap v399: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 20:55:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:55:00 : epoch 69237493 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fadf00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:00 compute-1 podman[165851]: 2025-11-23 20:55:00.677316236 +0000 UTC m=+0.090665638 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 23 20:55:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:55:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:00.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:55:00 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:55:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:01.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:55:01 : epoch 69237493 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae00001970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:55:02 : epoch 69237493 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae08001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:02 compute-1 ceph-mon[80135]: pgmap v400: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 20:55:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:55:02 : epoch 69237493 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fadf00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:02.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:55:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:03.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:55:03 : epoch 69237493 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fadf4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:03 compute-1 sudo[167121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 20:55:03 compute-1 sudo[167121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:55:03 compute-1 sudo[167121]: pam_unix(sudo:session): session closed for user root
Nov 23 20:55:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:55:04 : epoch 69237493 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae00001970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:55:04 : epoch 69237493 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae08001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:04.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:04 compute-1 ceph-mon[80135]: pgmap v401: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 20:55:04 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:55:04 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:55:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:05.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:55:05 : epoch 69237493 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae08001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:05 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:55:05 compute-1 ceph-mon[80135]: pgmap v402: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 20:55:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:55:06 : epoch 69237493 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fadf4002470 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:55:06 : epoch 69237493 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fadf4002470 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:06.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:07.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:55:07 : epoch 69237493 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fadf4002470 fd 39 proxy ignored for local
Nov 23 20:55:07 compute-1 kernel: ganesha.nfsd[162024]: segfault at 50 ip 00007faec47be32e sp 00007fae88ff8210 error 4 in libntirpc.so.5.8[7faec47a3000+2c000] likely on CPU 4 (core 0, socket 4)
Nov 23 20:55:07 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 23 20:55:07 compute-1 systemd[1]: Started Process Core Dump (PID 167160/UID 0).
Nov 23 20:55:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:08.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:08 compute-1 ceph-mon[80135]: pgmap v403: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 20:55:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:09.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:10 compute-1 sshd-session[167162]: Invalid user kevin from 118.145.189.160 port 55606
Nov 23 20:55:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:10.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:10 compute-1 sshd-session[167162]: Received disconnect from 118.145.189.160 port 55606:11: Bye Bye [preauth]
Nov 23 20:55:10 compute-1 sshd-session[167162]: Disconnected from invalid user kevin 118.145.189.160 port 55606 [preauth]
Nov 23 20:55:11 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:55:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:11.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:11 compute-1 systemd-coredump[167161]: Process 153135 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 47:
                                                    #0  0x00007faec47be32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 23 20:55:12 compute-1 ceph-mon[80135]: pgmap v404: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 20:55:12 compute-1 systemd[1]: systemd-coredump@4-167160-0.service: Deactivated successfully.
Nov 23 20:55:12 compute-1 systemd[1]: systemd-coredump@4-167160-0.service: Consumed 1.082s CPU time.
Nov 23 20:55:12 compute-1 podman[167170]: 2025-11-23 20:55:12.119085321 +0000 UTC m=+0.023051982 container died f2f2a3dd8fa50cb909b3976216c43e6961fdaa3c816f16d084b16b89ca08fb7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 20:55:12 compute-1 systemd[1]: var-lib-containers-storage-overlay-8ffb40541bf744548711f283b1247279c9fdfd60049305220ba9ccbfd0bc0820-merged.mount: Deactivated successfully.
Nov 23 20:55:12 compute-1 podman[167170]: 2025-11-23 20:55:12.18180372 +0000 UTC m=+0.085770371 container remove f2f2a3dd8fa50cb909b3976216c43e6961fdaa3c816f16d084b16b89ca08fb7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True)
Nov 23 20:55:12 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Main process exited, code=exited, status=139/n/a
Nov 23 20:55:12 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Failed with result 'exit-code'.
Nov 23 20:55:12 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.476s CPU time.
Nov 23 20:55:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:12.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:13 compute-1 ceph-mon[80135]: pgmap v405: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 23 20:55:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:13.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:14.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:14 compute-1 ceph-mon[80135]: pgmap v406: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:55:15 compute-1 kernel: SELinux:  Converting 2773 SID table entries...
Nov 23 20:55:15 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 20:55:15 compute-1 kernel: SELinux:  policy capability open_perms=1
Nov 23 20:55:15 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 20:55:15 compute-1 kernel: SELinux:  policy capability always_check_network=0
Nov 23 20:55:15 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 20:55:15 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 20:55:15 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 20:55:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:15.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:15 compute-1 ceph-mon[80135]: pgmap v407: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 20:55:16 compute-1 groupadd[167230]: group added to /etc/group: name=dnsmasq, GID=992
Nov 23 20:55:16 compute-1 groupadd[167230]: group added to /etc/gshadow: name=dnsmasq
Nov 23 20:55:16 compute-1 groupadd[167230]: new group: name=dnsmasq, GID=992
Nov 23 20:55:16 compute-1 useradd[167237]: new user: name=dnsmasq, UID=992, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Nov 23 20:55:16 compute-1 dbus-broker-launch[755]: Noticed file-system modification, trigger reload.
Nov 23 20:55:16 compute-1 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 23 20:55:16 compute-1 dbus-broker-launch[755]: Noticed file-system modification, trigger reload.
Nov 23 20:55:16 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:55:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:16.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:17 compute-1 groupadd[167250]: group added to /etc/group: name=clevis, GID=991
Nov 23 20:55:17 compute-1 groupadd[167250]: group added to /etc/gshadow: name=clevis
Nov 23 20:55:17 compute-1 groupadd[167250]: new group: name=clevis, GID=991
Nov 23 20:55:17 compute-1 useradd[167257]: new user: name=clevis, UID=991, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Nov 23 20:55:17 compute-1 usermod[167267]: add 'clevis' to group 'tss'
Nov 23 20:55:17 compute-1 usermod[167267]: add 'clevis' to shadow group 'tss'
Nov 23 20:55:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:55:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:17.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:55:17 compute-1 ceph-mon[80135]: pgmap v408: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:55:18 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205518 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 20:55:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:18.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:19 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:55:19 compute-1 polkitd[43500]: Reloading rules
Nov 23 20:55:19 compute-1 polkitd[43500]: Collecting garbage unconditionally...
Nov 23 20:55:19 compute-1 polkitd[43500]: Loading rules from directory /etc/polkit-1/rules.d
Nov 23 20:55:19 compute-1 polkitd[43500]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 23 20:55:19 compute-1 polkitd[43500]: Finished loading, compiling and executing 3 rules
Nov 23 20:55:19 compute-1 polkitd[43500]: Reloading rules
Nov 23 20:55:19 compute-1 polkitd[43500]: Collecting garbage unconditionally...
Nov 23 20:55:19 compute-1 polkitd[43500]: Loading rules from directory /etc/polkit-1/rules.d
Nov 23 20:55:19 compute-1 polkitd[43500]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 23 20:55:19 compute-1 polkitd[43500]: Finished loading, compiling and executing 3 rules
Nov 23 20:55:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:19.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:19 compute-1 sudo[167393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:55:19 compute-1 sudo[167393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:55:19 compute-1 sudo[167393]: pam_unix(sudo:session): session closed for user root
Nov 23 20:55:20 compute-1 ceph-mon[80135]: pgmap v409: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:55:20 compute-1 groupadd[167481]: group added to /etc/group: name=ceph, GID=167
Nov 23 20:55:20 compute-1 groupadd[167481]: group added to /etc/gshadow: name=ceph
Nov 23 20:55:20 compute-1 groupadd[167481]: new group: name=ceph, GID=167
Nov 23 20:55:20 compute-1 useradd[167487]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Nov 23 20:55:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:20.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:21 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:55:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:55:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:21.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:55:22 compute-1 ceph-mon[80135]: pgmap v410: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:55:22 compute-1 podman[167497]: 2025-11-23 20:55:22.408765888 +0000 UTC m=+0.060749236 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 20:55:22 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Scheduled restart job, restart counter is at 5.
Nov 23 20:55:22 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 20:55:22 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.476s CPU time.
Nov 23 20:55:22 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 20:55:22 compute-1 podman[167687]: 2025-11-23 20:55:22.6141167 +0000 UTC m=+0.021265610 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 20:55:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:22.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:22 compute-1 podman[167687]: 2025-11-23 20:55:22.79324864 +0000 UTC m=+0.200397530 container create d38ed78145ce27a698715b902dd179194e031801435ca90af85af498b8f8280c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 23 20:55:22 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcfbd664ea36e63f9ed359c0f79e6328cde03d445ebf8fde9d5034673736dd60/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 23 20:55:22 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcfbd664ea36e63f9ed359c0f79e6328cde03d445ebf8fde9d5034673736dd60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 20:55:22 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcfbd664ea36e63f9ed359c0f79e6328cde03d445ebf8fde9d5034673736dd60/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 20:55:22 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcfbd664ea36e63f9ed359c0f79e6328cde03d445ebf8fde9d5034673736dd60/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.fuxuha-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 20:55:22 compute-1 podman[167687]: 2025-11-23 20:55:22.850373908 +0000 UTC m=+0.257522818 container init d38ed78145ce27a698715b902dd179194e031801435ca90af85af498b8f8280c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Nov 23 20:55:22 compute-1 podman[167687]: 2025-11-23 20:55:22.857189211 +0000 UTC m=+0.264338101 container start d38ed78145ce27a698715b902dd179194e031801435ca90af85af498b8f8280c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 20:55:22 compute-1 bash[167687]: d38ed78145ce27a698715b902dd179194e031801435ca90af85af498b8f8280c
Nov 23 20:55:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:22 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 23 20:55:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:22 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 23 20:55:22 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 20:55:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:22 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 23 20:55:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:22 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 23 20:55:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:22 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 23 20:55:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:22 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 23 20:55:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:22 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 23 20:55:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:22 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 20:55:23 compute-1 systemd[1]: Stopping OpenSSH server daemon...
Nov 23 20:55:23 compute-1 sshd[1005]: Received signal 15; terminating.
Nov 23 20:55:23 compute-1 systemd[1]: sshd.service: Deactivated successfully.
Nov 23 20:55:23 compute-1 systemd[1]: Stopped OpenSSH server daemon.
Nov 23 20:55:23 compute-1 systemd[1]: sshd.service: Consumed 7.420s CPU time, read 564.0K from disk, written 316.0K to disk.
Nov 23 20:55:23 compute-1 systemd[1]: Stopped target sshd-keygen.target.
Nov 23 20:55:23 compute-1 systemd[1]: Stopping sshd-keygen.target...
Nov 23 20:55:23 compute-1 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 23 20:55:23 compute-1 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 23 20:55:23 compute-1 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 23 20:55:23 compute-1 systemd[1]: Reached target sshd-keygen.target.
Nov 23 20:55:23 compute-1 systemd[1]: Starting OpenSSH server daemon...
Nov 23 20:55:23 compute-1 sshd[168248]: Server listening on 0.0.0.0 port 22.
Nov 23 20:55:23 compute-1 sshd[168248]: Server listening on :: port 22.
Nov 23 20:55:23 compute-1 systemd[1]: Started OpenSSH server daemon.
Nov 23 20:55:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:23.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:24 compute-1 ceph-mon[80135]: pgmap v411: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 20:55:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:24.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:25 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 20:55:25 compute-1 systemd[1]: Starting man-db-cache-update.service...
Nov 23 20:55:25 compute-1 systemd[1]: Reloading.
Nov 23 20:55:25 compute-1 systemd-sysv-generator[168511]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:55:25 compute-1 systemd-rc-local-generator[168507]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:55:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:25.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:25 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 23 20:55:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:26.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:27.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:28 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:55:28 compute-1 ceph-mon[80135]: pgmap v412: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:55:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:28.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:29 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 20:55:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:29 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:55:29 compute-1 ceph-mon[80135]: pgmap v413: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:55:29 compute-1 ceph-mon[80135]: pgmap v414: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:55:29 compute-1 sudo[148826]: pam_unix(sudo:session): session closed for user root
Nov 23 20:55:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:29.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:29 compute-1 sshd-session[172663]: Received disconnect from 102.176.81.29 port 39392:11: Bye Bye [preauth]
Nov 23 20:55:29 compute-1 sshd-session[172663]: Disconnected from authenticating user root 102.176.81.29 port 39392 [preauth]
Nov 23 20:55:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:30.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:55:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:31.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:55:31 compute-1 podman[175911]: 2025-11-23 20:55:31.700211881 +0000 UTC m=+0.100402645 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 23 20:55:31 compute-1 ceph-mon[80135]: pgmap v415: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s
Nov 23 20:55:32 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 23 20:55:32 compute-1 systemd[1]: Finished man-db-cache-update.service.
Nov 23 20:55:32 compute-1 systemd[1]: man-db-cache-update.service: Consumed 9.522s CPU time.
Nov 23 20:55:32 compute-1 systemd[1]: run-r55f65184e2804e1fbdfbc34c4cf18147.service: Deactivated successfully.
Nov 23 20:55:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:32.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:33 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:55:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:33.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:33 compute-1 ceph-mon[80135]: pgmap v416: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s
Nov 23 20:55:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:55:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:34.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 20:55:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 23 20:55:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 23 20:55:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 23 20:55:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 23 20:55:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 23 20:55:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 23 20:55:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 20:55:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 20:55:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 20:55:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 23 20:55:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 20:55:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 23 20:55:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 23 20:55:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 23 20:55:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 23 20:55:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 23 20:55:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 23 20:55:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 23 20:55:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 23 20:55:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 23 20:55:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 23 20:55:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 23 20:55:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 23 20:55:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 20:55:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 23 20:55:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 20:55:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:35.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:36 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:36 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc0016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:36 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:36 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:36.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:37.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:37 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:38 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205538 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 20:55:38 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:38 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:38 compute-1 ceph-mon[80135]: pgmap v417: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 20:55:38 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:55:38 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:38 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:38.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:39 compute-1 ceph-mon[80135]: pgmap v418: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 20:55:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:55:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:39.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:55:39 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:39 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:40 compute-1 sudo[176960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:55:40 compute-1 sudo[176960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:55:40 compute-1 sudo[176960]: pam_unix(sudo:session): session closed for user root
Nov 23 20:55:40 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:40 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc0021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:40 compute-1 ceph-mon[80135]: pgmap v419: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 20:55:40 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:40 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:40.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:41.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:41 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:42 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:42 compute-1 ceph-mon[80135]: pgmap v420: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 20:55:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:42 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc0021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:42.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:43 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:55:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:43.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:43 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:43 compute-1 sudo[177112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzxbefywssbndauufxkambdpdjxdpqjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931343.3804448-969-166588988743711/AnsiballZ_systemd.py'
Nov 23 20:55:43 compute-1 sudo[177112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:55:44 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:44 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:44 compute-1 ceph-mon[80135]: pgmap v421: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Nov 23 20:55:44 compute-1 python3.9[177114]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 23 20:55:44 compute-1 systemd[1]: Reloading.
Nov 23 20:55:44 compute-1 systemd-rc-local-generator[177144]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:55:44 compute-1 systemd-sysv-generator[177147]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:55:44 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:44 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:44 compute-1 sudo[177112]: pam_unix(sudo:session): session closed for user root
Nov 23 20:55:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:55:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:44.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:55:45 compute-1 sudo[177302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfzgxqooagnzsnkaxntwxragkappjiiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931344.7378511-969-124661457710031/AnsiballZ_systemd.py'
Nov 23 20:55:45 compute-1 sudo[177302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:55:45 compute-1 python3.9[177304]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 23 20:55:45 compute-1 systemd[1]: Reloading.
Nov 23 20:55:45 compute-1 systemd-rc-local-generator[177331]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:55:45 compute-1 systemd-sysv-generator[177335]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:55:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:45.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:45 compute-1 sudo[177302]: pam_unix(sudo:session): session closed for user root
Nov 23 20:55:45 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:45 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc0021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:46 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:46 compute-1 sudo[177493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qodbofiyszjwfkixzobksasxipzanobn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931345.803922-969-28179688373707/AnsiballZ_systemd.py'
Nov 23 20:55:46 compute-1 sudo[177493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:55:46 compute-1 ceph-mon[80135]: pgmap v422: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 255 B/s wr, 1 op/s
Nov 23 20:55:46 compute-1 python3.9[177495]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 23 20:55:46 compute-1 systemd[1]: Reloading.
Nov 23 20:55:46 compute-1 systemd-rc-local-generator[177522]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:55:46 compute-1 systemd-sysv-generator[177527]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:55:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:46 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:46 compute-1 sudo[177493]: pam_unix(sudo:session): session closed for user root
Nov 23 20:55:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:55:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:46.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:55:47 compute-1 sudo[177683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtyguwqfswyaadkdkdtmckucdcosmxdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931346.867143-969-25527718849060/AnsiballZ_systemd.py'
Nov 23 20:55:47 compute-1 sudo[177683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:55:47 compute-1 python3.9[177685]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 23 20:55:47 compute-1 systemd[1]: Reloading.
Nov 23 20:55:47 compute-1 systemd-sysv-generator[177719]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:55:47 compute-1 systemd-rc-local-generator[177715]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:55:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:55:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:47.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:55:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:47 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:47 compute-1 sudo[177683]: pam_unix(sudo:session): session closed for user root
Nov 23 20:55:48 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:48 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc0095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:48 compute-1 ceph-mon[80135]: pgmap v423: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 20:55:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:55:48 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:55:48 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:48 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:48.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:49 compute-1 sudo[177874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjwfrlaoyvxlsktvkkqpjkgfdawdivdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931348.8722324-1056-169516976891646/AnsiballZ_systemd.py'
Nov 23 20:55:49 compute-1 sudo[177874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:55:49 compute-1 python3.9[177876]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 20:55:49 compute-1 systemd[1]: Reloading.
Nov 23 20:55:49 compute-1 systemd-sysv-generator[177909]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:55:49 compute-1 systemd-rc-local-generator[177906]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:55:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:49.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:49 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:49 compute-1 sudo[177874]: pam_unix(sudo:session): session closed for user root
Nov 23 20:55:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:50 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:50 compute-1 sudo[178065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbrpkbbeqjopnzdtrywbwrdtkwzkwisc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931349.948054-1056-214391031083376/AnsiballZ_systemd.py'
Nov 23 20:55:50 compute-1 ceph-mon[80135]: pgmap v424: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 20:55:50 compute-1 sudo[178065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:55:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:50 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc0095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:50 compute-1 python3.9[178067]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 20:55:50 compute-1 systemd[1]: Reloading.
Nov 23 20:55:50 compute-1 systemd-rc-local-generator[178099]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:55:50 compute-1 systemd-sysv-generator[178103]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:55:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:55:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:50.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:55:50 compute-1 sudo[178065]: pam_unix(sudo:session): session closed for user root
Nov 23 20:55:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:55:51.050 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 20:55:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:55:51.050 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 20:55:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:55:51.050 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 20:55:51 compute-1 sudo[178255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ousgdrfriyjivmpuqhdinskpxadxgzqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931351.0111861-1056-236606251874923/AnsiballZ_systemd.py'
Nov 23 20:55:51 compute-1 sudo[178255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:55:51 compute-1 python3.9[178257]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 20:55:51 compute-1 systemd[1]: Reloading.
Nov 23 20:55:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:51.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:51 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:51 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:51 compute-1 systemd-rc-local-generator[178286]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:55:51 compute-1 systemd-sysv-generator[178291]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:55:51 compute-1 sudo[178255]: pam_unix(sudo:session): session closed for user root
Nov 23 20:55:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:52 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:52 compute-1 ceph-mon[80135]: pgmap v425: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 23 20:55:52 compute-1 sudo[178446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxiyubykuehioatnvznsusmqewyyllvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931352.0240855-1056-195459861016309/AnsiballZ_systemd.py'
Nov 23 20:55:52 compute-1 sudo[178446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:55:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:52 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:52 compute-1 python3.9[178448]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 20:55:52 compute-1 podman[178450]: 2025-11-23 20:55:52.616777508 +0000 UTC m=+0.049582396 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 23 20:55:52 compute-1 sudo[178446]: pam_unix(sudo:session): session closed for user root
Nov 23 20:55:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:55:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:52.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:55:53 compute-1 sudo[178620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmllhuqdisjajcqukupylbrwmehhtquv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931352.7376163-1056-172585576039729/AnsiballZ_systemd.py'
Nov 23 20:55:53 compute-1 sudo[178620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:55:53 compute-1 python3.9[178622]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 20:55:53 compute-1 systemd[1]: Reloading.
Nov 23 20:55:53 compute-1 systemd-sysv-generator[178655]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:55:53 compute-1 systemd-rc-local-generator[178649]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:55:53 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:55:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:53.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:53 compute-1 sudo[178620]: pam_unix(sudo:session): session closed for user root
Nov 23 20:55:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:53 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:54 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:54 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:54 compute-1 ceph-mon[80135]: pgmap v426: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:55:54 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:54 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:55:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:54.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:55:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:55.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:55 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:56 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:56 compute-1 ceph-mon[80135]: pgmap v427: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 20:55:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:56 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:56 compute-1 sudo[178811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emoqsfxzhzyrvokjglziwgdavgvoqelf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931356.2613957-1164-197790802295892/AnsiballZ_systemd.py'
Nov 23 20:55:56 compute-1 sudo[178811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:55:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:55:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:56.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:55:56 compute-1 python3.9[178813]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 23 20:55:56 compute-1 systemd[1]: Reloading.
Nov 23 20:55:57 compute-1 systemd-rc-local-generator[178842]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:55:57 compute-1 systemd-sysv-generator[178846]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:55:57 compute-1 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 23 20:55:57 compute-1 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 23 20:55:57 compute-1 sudo[178811]: pam_unix(sudo:session): session closed for user root
Nov 23 20:55:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:55:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:57.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:55:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:57 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:58 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:58 compute-1 sudo[179004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byxmaiggbdzeeqlhsiaobjvputwjvwma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931357.717295-1188-128951270508491/AnsiballZ_systemd.py'
Nov 23 20:55:58 compute-1 sudo[179004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:55:58 compute-1 ceph-mon[80135]: pgmap v428: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:55:58 compute-1 python3.9[179006]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 20:55:58 compute-1 sudo[179004]: pam_unix(sudo:session): session closed for user root
Nov 23 20:55:58 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:55:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:58 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:58.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:58 compute-1 sudo[179159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnhvtxuipmxklppbqnsnolbbuuupgixe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931358.5833428-1188-167097880111813/AnsiballZ_systemd.py'
Nov 23 20:55:58 compute-1 sudo[179159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:55:59 compute-1 python3.9[179161]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 20:55:59 compute-1 sudo[179159]: pam_unix(sudo:session): session closed for user root
Nov 23 20:55:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:55:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:55:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:59.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:55:59 compute-1 sudo[179315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gagzdrcjvuefomfnubodcnsabfhdgqnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931359.3774822-1188-261131088789892/AnsiballZ_systemd.py'
Nov 23 20:55:59 compute-1 sudo[179315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:55:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:59 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:55:59 compute-1 python3.9[179317]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 20:56:00 compute-1 sudo[179315]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:00 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:00 compute-1 sudo[179341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:56:00 compute-1 sudo[179341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:56:00 compute-1 sudo[179341]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:00 compute-1 ceph-mon[80135]: pgmap v429: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:56:00 compute-1 sudo[179495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfbeayzdffzqqvavokbmhzajpollqrwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931360.1427288-1188-211668785864197/AnsiballZ_systemd.py'
Nov 23 20:56:00 compute-1 sudo[179495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:00 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:00 compute-1 python3.9[179497]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 20:56:00 compute-1 sudo[179495]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:00.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:01 compute-1 sudo[179650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvgrrsesjzzrhulubkcxcldexgqtvrcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931360.9113429-1188-84393979939840/AnsiballZ_systemd.py'
Nov 23 20:56:01 compute-1 sudo[179650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:01 compute-1 python3.9[179652]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 20:56:01 compute-1 sudo[179650]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:01.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:01 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:01 compute-1 sudo[179815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlasgadwgyqaqldktqzaetnlfpfioxue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931361.6624155-1188-95803495682362/AnsiballZ_systemd.py'
Nov 23 20:56:01 compute-1 sudo[179815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:01 compute-1 podman[179780]: 2025-11-23 20:56:01.969235294 +0000 UTC m=+0.076518638 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 23 20:56:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:02 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:02 compute-1 python3.9[179824]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 20:56:02 compute-1 ceph-mon[80135]: pgmap v430: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 20:56:02 compute-1 sudo[179815]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:02 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:02 compute-1 sudo[179986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxmsoewlkoshxfahhuzjnjoiyrnvurtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931362.451781-1188-36625743174337/AnsiballZ_systemd.py'
Nov 23 20:56:02 compute-1 sudo[179986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:02.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:02 compute-1 python3.9[179988]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 20:56:03 compute-1 sudo[179986]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:56:03 compute-1 sudo[180141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndqkztedlxezbrdlhabvhhdzjtwtblbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931363.1919591-1188-218780220675320/AnsiballZ_systemd.py'
Nov 23 20:56:03 compute-1 sudo[180141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:03 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:56:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:03.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:03 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:03 compute-1 python3.9[180143]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 20:56:03 compute-1 sudo[180141]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:04 compute-1 sudo[180223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:56:04 compute-1 sudo[180223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:56:04 compute-1 sudo[180223]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:04 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:04 compute-1 sudo[180256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 20:56:04 compute-1 sudo[180256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:56:04 compute-1 sudo[180347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbdpdouqujxncfnvflnjxqxjfiuicfcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931363.9533398-1188-223481318431811/AnsiballZ_systemd.py'
Nov 23 20:56:04 compute-1 sudo[180347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:04 compute-1 ceph-mon[80135]: pgmap v431: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:56:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:04 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:04 compute-1 python3.9[180349]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 20:56:04 compute-1 sudo[180256]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:04 compute-1 sudo[180347]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:56:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:04.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:56:05 compute-1 sudo[180533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tubhonprgyrikhmjwyimttudqhkgizjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931364.746476-1188-206117606693335/AnsiballZ_systemd.py'
Nov 23 20:56:05 compute-1 sudo[180533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:05 compute-1 python3.9[180535]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 20:56:05 compute-1 sudo[180533]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:05.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:05 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:05 compute-1 sudo[180689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dthnbqlnglnihcghyqemanferczbrmqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931365.535917-1188-215985247399859/AnsiballZ_systemd.py'
Nov 23 20:56:05 compute-1 sudo[180689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:06 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:06 compute-1 python3.9[180691]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 20:56:06 compute-1 sudo[180689]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:06 compute-1 ceph-mon[80135]: pgmap v432: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 20:56:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:06 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:06 compute-1 sudo[180844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywjnepjpbkpkceesqtfkxohravynupyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931366.3118181-1188-172899178925123/AnsiballZ_systemd.py'
Nov 23 20:56:06 compute-1 sudo[180844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:06.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:06 compute-1 python3.9[180846]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 20:56:06 compute-1 sudo[180844]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:07 compute-1 sudo[180999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etahmmmflxjyvczwgrqdywaoxsyhofwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931367.053779-1188-72534937324289/AnsiballZ_systemd.py'
Nov 23 20:56:07 compute-1 sudo[180999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:07 compute-1 python3.9[181001]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 20:56:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:07.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:07 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:07 compute-1 sudo[180999]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:08 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:08 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:08 compute-1 sudo[181157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thxxsllrvjjkfyqsiycrxuymveawavvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931367.8013086-1188-233394125055941/AnsiballZ_systemd.py'
Nov 23 20:56:08 compute-1 sudo[181157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:08 compute-1 python3.9[181159]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 20:56:08 compute-1 sudo[181157]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:08 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:56:08 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:08 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:08 compute-1 ceph-mon[80135]: pgmap v433: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:56:08 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:56:08 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:56:08 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:56:08 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 20:56:08 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:56:08 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:56:08 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 20:56:08 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 20:56:08 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:56:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:08.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:09.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:09 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:09 compute-1 ceph-mon[80135]: pgmap v434: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:56:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:10 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:10 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cac000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:10.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:11.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:11 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:12 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:12 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:12 compute-1 ceph-mon[80135]: pgmap v435: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 20:56:12 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:12 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:12 compute-1 sudo[181189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 20:56:12 compute-1 sudo[181189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:56:12 compute-1 sudo[181189]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:12.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:13 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:56:13 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:56:13 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:56:13 compute-1 ceph-mon[80135]: pgmap v436: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:56:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:13.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:13 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cac001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:14 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:14 compute-1 sudo[181340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trperevmtagtxjjbtlmnrqnbhwiwmetq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931373.9203424-1494-228688889061860/AnsiballZ_file.py'
Nov 23 20:56:14 compute-1 sudo[181340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:14 compute-1 python3.9[181342]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:56:14 compute-1 sudo[181340]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:14 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:14 compute-1 sudo[181492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrnmsvtmfywtsvvptsckhsszyvwweaqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931374.4921374-1494-219540215256901/AnsiballZ_file.py'
Nov 23 20:56:14 compute-1 sudo[181492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:14.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:14 compute-1 python3.9[181494]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:56:15 compute-1 sudo[181492]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:15 compute-1 sudo[181644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwwrmddrxcmjhdocfnjmwtzjtxuzojig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931375.1684046-1494-150551053598741/AnsiballZ_file.py'
Nov 23 20:56:15 compute-1 sudo[181644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:15.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:15 compute-1 python3.9[181646]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:56:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:15 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:15 compute-1 sudo[181644]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:15 compute-1 ceph-mon[80135]: pgmap v437: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 20:56:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:16 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cac001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:16 compute-1 sudo[181797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nddhblskxlwqqqqigfagbqtqmnunumjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931375.8741186-1494-254916042884895/AnsiballZ_file.py'
Nov 23 20:56:16 compute-1 sudo[181797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:16 compute-1 python3.9[181799]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:56:16 compute-1 sudo[181797]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:16 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:16 compute-1 sudo[181949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uebvxbrqdgtwmbpvirtpnycstrjaoafz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931376.50359-1494-81169866299929/AnsiballZ_file.py'
Nov 23 20:56:16 compute-1 sudo[181949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:56:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:16.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:56:17 compute-1 python3.9[181951]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:56:17 compute-1 sudo[181949]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:17 compute-1 sudo[182101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfynkvffwkymafxyeuevmulmznznfrxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931377.2053266-1494-110477470387853/AnsiballZ_file.py'
Nov 23 20:56:17 compute-1 sudo[182101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:17 compute-1 python3.9[182103]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:56:17 compute-1 sudo[182101]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:56:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:17.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:56:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:17 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:18 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:18 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:18 compute-1 ceph-mon[80135]: pgmap v438: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:56:18 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:56:18 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:18 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cac001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:56:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:18.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:56:19 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:56:19 compute-1 sudo[182254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axqjrhytgcuhttloevszqezuirxkinyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931378.7874835-1623-122150935213093/AnsiballZ_stat.py'
Nov 23 20:56:19 compute-1 sudo[182254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:19 compute-1 python3.9[182256]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:56:19 compute-1 sudo[182254]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:19.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:19 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:19 compute-1 sudo[182380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eegovhayzxbnvuikmwfhdempogjeiujw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931378.7874835-1623-122150935213093/AnsiballZ_copy.py'
Nov 23 20:56:19 compute-1 sudo[182380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:20 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:20 compute-1 python3.9[182382]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763931378.7874835-1623-122150935213093/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:56:20 compute-1 sudo[182380]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:20 compute-1 sudo[182383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:56:20 compute-1 sudo[182383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:56:20 compute-1 sudo[182383]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:20 compute-1 ceph-mon[80135]: pgmap v439: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:56:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:20 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:20 compute-1 sudo[182557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dstsajwomhhfjibdhcbutffmtaalvvab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931380.3727849-1623-162860335058016/AnsiballZ_stat.py'
Nov 23 20:56:20 compute-1 sudo[182557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:20.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:20 compute-1 python3.9[182559]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:56:20 compute-1 sudo[182557]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:21 compute-1 sudo[182682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wumsjhvuiobwqmjguyxydvrnmrvrpvvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931380.3727849-1623-162860335058016/AnsiballZ_copy.py'
Nov 23 20:56:21 compute-1 sudo[182682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:21 compute-1 python3.9[182684]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763931380.3727849-1623-162860335058016/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:56:21 compute-1 sudo[182682]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:21.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:21 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cac002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:21 compute-1 sudo[182835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lenofhzniopzuyxosseiszstadphimmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931381.5461602-1623-225395930069391/AnsiballZ_stat.py'
Nov 23 20:56:21 compute-1 sudo[182835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:22 compute-1 python3.9[182837]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:56:22 compute-1 sudo[182835]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:22 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:22 compute-1 ceph-mon[80135]: pgmap v440: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 20:56:22 compute-1 sudo[182960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzskdutdjzqrbqkcpmxltfvhlmaacmma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931381.5461602-1623-225395930069391/AnsiballZ_copy.py'
Nov 23 20:56:22 compute-1 sudo[182960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:22 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:22 compute-1 python3.9[182962]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763931381.5461602-1623-225395930069391/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:56:22 compute-1 sudo[182960]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:22.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:23 compute-1 sudo[183130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gavbhspdxrtzptoqveseoovcmwszfjmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931382.7664363-1623-120764517474096/AnsiballZ_stat.py'
Nov 23 20:56:23 compute-1 sudo[183130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:23 compute-1 podman[183086]: 2025-11-23 20:56:23.047618578 +0000 UTC m=+0.051255272 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 23 20:56:23 compute-1 python3.9[183134]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:56:23 compute-1 sudo[183130]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:23 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:56:23 compute-1 sudo[183258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guwfspjgvmgpldfsylqwfhlonfmgptpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931382.7664363-1623-120764517474096/AnsiballZ_copy.py'
Nov 23 20:56:23 compute-1 sudo[183258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:23.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:23 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:23 compute-1 python3.9[183260]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763931382.7664363-1623-120764517474096/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:56:23 compute-1 sudo[183258]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:24 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:24 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:24 compute-1 sudo[183410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znfkbwjtjrzqqqbyxodfxmqqpedrmwps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931383.9456756-1623-73524656600136/AnsiballZ_stat.py'
Nov 23 20:56:24 compute-1 sudo[183410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:24 compute-1 ceph-mon[80135]: pgmap v441: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:56:24 compute-1 python3.9[183412]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:56:24 compute-1 sudo[183410]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:24 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:24 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb8003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:24 compute-1 sudo[183535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqqbmijnygbgiknbbbzuplxtifbxmvaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931383.9456756-1623-73524656600136/AnsiballZ_copy.py'
Nov 23 20:56:24 compute-1 sudo[183535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:24.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:24 compute-1 python3.9[183537]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763931383.9456756-1623-73524656600136/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:56:24 compute-1 sudo[183535]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:25 compute-1 sudo[183687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehtiderjbowmqarpmnwlkyvuynkbbpmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931385.0416245-1623-221571323816832/AnsiballZ_stat.py'
Nov 23 20:56:25 compute-1 sudo[183687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:25 compute-1 python3.9[183689]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:56:25 compute-1 sudo[183687]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:56:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:25.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:56:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:25 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cac002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205625 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 20:56:25 compute-1 sudo[183813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgwggmjdekxqkryofwvybyeetliwnddi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931385.0416245-1623-221571323816832/AnsiballZ_copy.py'
Nov 23 20:56:25 compute-1 sudo[183813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:26 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:26 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:26 compute-1 python3.9[183815]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763931385.0416245-1623-221571323816832/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:56:26 compute-1 sudo[183813]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:26 compute-1 ceph-mon[80135]: pgmap v442: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 20:56:26 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:26 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:26 compute-1 sudo[183965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqgwsrnlbpjiyqdlmatsnelgvvyxgdke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931386.2785227-1623-19103682277062/AnsiballZ_stat.py'
Nov 23 20:56:26 compute-1 sudo[183965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:26 compute-1 python3.9[183967]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:56:26 compute-1 sudo[183965]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:56:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:26.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:56:27 compute-1 sudo[184088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leqdgjsmfbycjyjbalngqbrdddvftbei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931386.2785227-1623-19103682277062/AnsiballZ_copy.py'
Nov 23 20:56:27 compute-1 sudo[184088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:27 compute-1 python3.9[184090]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763931386.2785227-1623-19103682277062/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:56:27 compute-1 sudo[184088]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:27 compute-1 sudo[184241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzvixjscevkmjlyyukmbjrmvquzetmio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931387.408747-1623-211513633385040/AnsiballZ_stat.py'
Nov 23 20:56:27 compute-1 sudo[184241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:27.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:27 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:27 compute-1 python3.9[184243]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:56:27 compute-1 sudo[184241]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:28 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:28 compute-1 sudo[184366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwwnicqhrulexmljozffguvjqiidzvgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931387.408747-1623-211513633385040/AnsiballZ_copy.py'
Nov 23 20:56:28 compute-1 sudo[184366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:28 compute-1 python3.9[184368]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763931387.408747-1623-211513633385040/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:56:28 compute-1 ceph-mon[80135]: pgmap v443: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 20:56:28 compute-1 sudo[184366]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:28 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:56:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:28 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb8003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:28.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 20:56:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:29.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 20:56:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:29 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:30 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:30 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:30 compute-1 ceph-mon[80135]: pgmap v444: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 20:56:30 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:30 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cac002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:30.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:31 compute-1 sudo[184519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdshdvxxmifhwucilkaxxfrdlsgmhekx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931391.0053968-1962-230289800233941/AnsiballZ_command.py'
Nov 23 20:56:31 compute-1 sudo[184519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:31 compute-1 python3.9[184521]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 23 20:56:31 compute-1 sudo[184519]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:31.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:31 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb8003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:32 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:32 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:32 compute-1 sudo[184692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urrjyqdbfpgwvgkjesiqnwawihiwnldr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931391.9360125-1989-278514078958418/AnsiballZ_file.py'
Nov 23 20:56:32 compute-1 sudo[184692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:32 compute-1 podman[184647]: 2025-11-23 20:56:32.27170644 +0000 UTC m=+0.096965894 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 23 20:56:32 compute-1 ceph-mon[80135]: pgmap v445: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:56:32 compute-1 python3.9[184695]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:56:32 compute-1 sudo[184692]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:32 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:32 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:56:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:32.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:56:32 compute-1 sudo[184851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzavbtxakifsxkuqxuzhmusenuwgsptr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931392.611382-1989-279846053977152/AnsiballZ_file.py'
Nov 23 20:56:32 compute-1 sudo[184851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:33 compute-1 python3.9[184853]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:56:33 compute-1 sudo[184851]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:56:33 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:56:33 compute-1 sudo[185004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnbpbhwrizkrxozmcmjwqmiabjnengui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931393.2594614-1989-39022153279869/AnsiballZ_file.py'
Nov 23 20:56:33 compute-1 sudo[185004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:33.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:33 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:33 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cac003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:33 compute-1 python3.9[185006]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:56:33 compute-1 sudo[185004]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:34 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:34 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cac003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:34 compute-1 sudo[185156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgmeinmrxcnsbrpoiabpnnlvbziokyzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931393.8899133-1989-182165910172411/AnsiballZ_file.py'
Nov 23 20:56:34 compute-1 sudo[185156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:34 compute-1 python3.9[185158]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:56:34 compute-1 sudo[185156]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:34 compute-1 ceph-mon[80135]: pgmap v446: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 20:56:34 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:34 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc003cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:34 compute-1 sudo[185308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikarnnzyljykndilwcanaxtzpcolnith ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931394.476327-1989-3775809255800/AnsiballZ_file.py'
Nov 23 20:56:34 compute-1 sudo[185308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:34.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:34 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:34 : epoch 692374ba : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 20:56:34 compute-1 python3.9[185310]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:56:34 compute-1 sudo[185308]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:35 compute-1 sudo[185460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xijdskasrfehyrnjbhssiplgzxqajwqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931395.114972-1989-158969059541381/AnsiballZ_file.py'
Nov 23 20:56:35 compute-1 sudo[185460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:35 compute-1 python3.9[185462]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:56:35 compute-1 sudo[185460]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:35.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:36 compute-1 sudo[185613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eryyphjujvmcmbzgtxitxcrojobkjheg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931395.7168236-1989-151478242231342/AnsiballZ_file.py'
Nov 23 20:56:36 compute-1 sudo[185613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:36 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:36 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cac003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:36 compute-1 python3.9[185615]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:56:36 compute-1 sudo[185613]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:36 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:36 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cac003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:36 compute-1 ceph-mon[80135]: pgmap v447: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:56:36 compute-1 sudo[185765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aujmcafirhszpdnonbcmyhvthlqnuiel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931396.3670106-1989-219117529241910/AnsiballZ_file.py'
Nov 23 20:56:36 compute-1 sudo[185765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:36.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:36 compute-1 python3.9[185767]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:56:36 compute-1 sudo[185765]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:37 compute-1 sudo[185917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abzofaiqjoqzttomhwjedlumfpfyilck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931397.0373437-1989-48709591619951/AnsiballZ_file.py'
Nov 23 20:56:37 compute-1 sudo[185917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:37 compute-1 python3.9[185919]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:56:37 compute-1 sudo[185917]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:37 compute-1 ceph-mon[80135]: pgmap v448: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:56:37 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Nov 23 20:56:37 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:56:37.625706) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 20:56:37 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Nov 23 20:56:37 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931397625765, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 4699, "num_deletes": 502, "total_data_size": 12906832, "memory_usage": 13076424, "flush_reason": "Manual Compaction"}
Nov 23 20:56:37 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Nov 23 20:56:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:37 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc003cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:56:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:37.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:56:37 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931397717581, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 8359352, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13302, "largest_seqno": 17996, "table_properties": {"data_size": 8341630, "index_size": 11976, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4677, "raw_key_size": 36450, "raw_average_key_size": 19, "raw_value_size": 8305186, "raw_average_value_size": 4482, "num_data_blocks": 524, "num_entries": 1853, "num_filter_entries": 1853, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930950, "oldest_key_time": 1763930950, "file_creation_time": 1763931397, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 23 20:56:37 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 91911 microseconds, and 13768 cpu microseconds.
Nov 23 20:56:37 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 20:56:37 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:56:37.717626) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 8359352 bytes OK
Nov 23 20:56:37 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:56:37.717642) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Nov 23 20:56:37 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:56:37.720272) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Nov 23 20:56:37 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:56:37.720287) EVENT_LOG_v1 {"time_micros": 1763931397720283, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 20:56:37 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:56:37.720302) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 20:56:37 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 12886541, prev total WAL file size 12886541, number of live WAL files 2.
Nov 23 20:56:37 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 20:56:37 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:56:37.722753) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Nov 23 20:56:37 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 20:56:37 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(8163KB)], [27(12MB)]
Nov 23 20:56:37 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931397722794, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 21794607, "oldest_snapshot_seqno": -1}
Nov 23 20:56:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:37 : epoch 692374ba : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 20:56:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:37 : epoch 692374ba : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:56:37 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 5079 keys, 15937737 bytes, temperature: kUnknown
Nov 23 20:56:37 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931397862123, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 15937737, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15898966, "index_size": 24965, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12741, "raw_key_size": 127060, "raw_average_key_size": 25, "raw_value_size": 15802094, "raw_average_value_size": 3111, "num_data_blocks": 1050, "num_entries": 5079, "num_filter_entries": 5079, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763931397, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Nov 23 20:56:37 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 20:56:37 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:56:37.862512) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 15937737 bytes
Nov 23 20:56:37 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:56:37.892250) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.3 rd, 114.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(8.0, 12.8 +0.0 blob) out(15.2 +0.0 blob), read-write-amplify(4.5) write-amplify(1.9) OK, records in: 6101, records dropped: 1022 output_compression: NoCompression
Nov 23 20:56:37 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:56:37.892325) EVENT_LOG_v1 {"time_micros": 1763931397892294, "job": 14, "event": "compaction_finished", "compaction_time_micros": 139434, "compaction_time_cpu_micros": 32364, "output_level": 6, "num_output_files": 1, "total_output_size": 15937737, "num_input_records": 6101, "num_output_records": 5079, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 20:56:37 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 20:56:37 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931397894975, "job": 14, "event": "table_file_deletion", "file_number": 29}
Nov 23 20:56:37 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 20:56:37 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931397899026, "job": 14, "event": "table_file_deletion", "file_number": 27}
Nov 23 20:56:37 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:56:37.722665) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:56:37 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:56:37.899096) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:56:37 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:56:37.899108) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:56:37 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:56:37.899111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:56:37 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:56:37.899114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:56:37 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:56:37.899116) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:56:37 compute-1 sudo[186071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzcfpqlconpazmfoiwslygvmzvvibfwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931397.670186-1989-133760214829783/AnsiballZ_file.py'
Nov 23 20:56:37 compute-1 sudo[186071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:38 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:38 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:38 compute-1 python3.9[186073]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:56:38 compute-1 sudo[186071]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:38 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:56:38 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:38 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:38 compute-1 sudo[186225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzkymqhkazumvqyhwbldbnlhqgthtuiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931398.321811-1989-272204729465413/AnsiballZ_file.py'
Nov 23 20:56:38 compute-1 sudo[186225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:38 compute-1 python3.9[186227]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:56:38 compute-1 sudo[186225]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:38.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:39 compute-1 sudo[186377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyqirppspxrfsdpchxkwxraxjfnfcmtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931398.9547246-1989-187822548557520/AnsiballZ_file.py'
Nov 23 20:56:39 compute-1 sudo[186377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:39 compute-1 python3.9[186379]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:56:39 compute-1 sudo[186377]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:39 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:39 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:39.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:39 compute-1 sudo[186530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kupgeojeogkjrocudignagproaooiyyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931399.5705404-1989-270503321032045/AnsiballZ_file.py'
Nov 23 20:56:39 compute-1 sudo[186530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:39 compute-1 ceph-mon[80135]: pgmap v449: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:56:39 compute-1 python3.9[186532]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:56:40 compute-1 sudo[186530]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:40 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:40 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:40 compute-1 sudo[186609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:56:40 compute-1 sudo[186609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:56:40 compute-1 sudo[186609]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:40 compute-1 sudo[186707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-terwxgfdcfptbkffpuzpeansuncthpns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931400.1827388-1989-101592294283408/AnsiballZ_file.py'
Nov 23 20:56:40 compute-1 sudo[186707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:40 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:40 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc003cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:40.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:40 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:40 : epoch 692374ba : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 20:56:41 compute-1 python3.9[186709]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:56:41 compute-1 sudo[186707]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:41 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb4000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:41.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:42 compute-1 ceph-mon[80135]: pgmap v450: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 20:56:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:42 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:42 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:42.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:43 compute-1 sudo[186862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgquhimxyayfqcazfdgghksnfazsgudt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931402.9988172-2286-243379155914923/AnsiballZ_stat.py'
Nov 23 20:56:43 compute-1 sudo[186862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:43 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:56:43 compute-1 python3.9[186864]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:56:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:43 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb4000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:43.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:43 compute-1 sudo[186862]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:44 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:44 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc003cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:44 compute-1 sudo[186985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfckqymdatlkrzobeoqmydpkuledxpzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931402.9988172-2286-243379155914923/AnsiballZ_copy.py'
Nov 23 20:56:44 compute-1 sudo[186985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:44 compute-1 ceph-mon[80135]: pgmap v451: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 20:56:44 compute-1 python3.9[186987]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931402.9988172-2286-243379155914923/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:56:44 compute-1 sudo[186985]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:44 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:44 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:44 compute-1 sudo[187137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rynxhzdehnpupupvmjrjoqmvlftgfiut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931404.4622972-2286-126981386187817/AnsiballZ_stat.py'
Nov 23 20:56:44 compute-1 sudo[187137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:44.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:44 compute-1 python3.9[187139]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:56:44 compute-1 sudo[187137]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:45 compute-1 sudo[187260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysnngeoouttwvhltsvsqumvqmuznmkhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931404.4622972-2286-126981386187817/AnsiballZ_copy.py'
Nov 23 20:56:45 compute-1 sudo[187260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:45 compute-1 python3.9[187262]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931404.4622972-2286-126981386187817/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:56:45 compute-1 sudo[187260]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:45 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:45 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:45.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:45 compute-1 sshd-session[187263]: Invalid user web3 from 92.118.39.92 port 46606
Nov 23 20:56:45 compute-1 sshd-session[187263]: Connection closed by invalid user web3 92.118.39.92 port 46606 [preauth]
Nov 23 20:56:45 compute-1 sudo[187415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giadvreitcegjqygoldwqrvhlmnwtjos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931405.6262276-2286-226957958230948/AnsiballZ_stat.py'
Nov 23 20:56:45 compute-1 sudo[187415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:46 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb4001ca0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:46 compute-1 python3.9[187417]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:56:46 compute-1 sudo[187415]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:46 compute-1 ceph-mon[80135]: pgmap v452: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 20:56:46 compute-1 sudo[187538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpzgcjbrlhxexngpvexbikidroguruis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931405.6262276-2286-226957958230948/AnsiballZ_copy.py'
Nov 23 20:56:46 compute-1 sudo[187538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:46 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc003cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:46 compute-1 python3.9[187540]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931405.6262276-2286-226957958230948/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:56:46 compute-1 sudo[187538]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:56:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:46.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:56:47 compute-1 sudo[187690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amfbncnktfxvkkcsnvdojwzlumamkdtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931406.8141205-2286-203052722212412/AnsiballZ_stat.py'
Nov 23 20:56:47 compute-1 sudo[187690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:47 compute-1 python3.9[187692]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:56:47 compute-1 sudo[187690]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:47 compute-1 sudo[187814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqeuhubctmshduozblbrmlcmgpuujiql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931406.8141205-2286-203052722212412/AnsiballZ_copy.py'
Nov 23 20:56:47 compute-1 sudo[187814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:47 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:47.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205647 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 20:56:47 compute-1 python3.9[187816]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931406.8141205-2286-203052722212412/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:56:47 compute-1 sudo[187814]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:48 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:48 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc0091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:48 compute-1 ceph-mon[80135]: pgmap v453: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 20:56:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:56:48 compute-1 sudo[187966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wiibafxcjddyccenfdbvvlszexjjixmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931407.9962013-2286-257078694264926/AnsiballZ_stat.py'
Nov 23 20:56:48 compute-1 sudo[187966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:48 compute-1 python3.9[187968]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:56:48 compute-1 sudo[187966]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:48 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:56:48 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:48 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb4001ca0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:48 compute-1 sudo[188091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ercogamrzpebvenmixmfqfajcrbklrgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931407.9962013-2286-257078694264926/AnsiballZ_copy.py'
Nov 23 20:56:48 compute-1 sudo[188091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:56:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:48.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:56:48 compute-1 python3.9[188093]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931407.9962013-2286-257078694264926/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:56:48 compute-1 sudo[188091]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:49 compute-1 sudo[188243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pamcnhdtfhhpuuphtazjfzownmwezvqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931409.110169-2286-136955137666298/AnsiballZ_stat.py'
Nov 23 20:56:49 compute-1 sudo[188243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:49 compute-1 sshd-session[187989]: Invalid user master from 102.176.81.29 port 42004
Nov 23 20:56:49 compute-1 python3.9[188245]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:56:49 compute-1 sudo[188243]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:49 compute-1 sshd-session[187989]: Received disconnect from 102.176.81.29 port 42004:11: Bye Bye [preauth]
Nov 23 20:56:49 compute-1 sshd-session[187989]: Disconnected from invalid user master 102.176.81.29 port 42004 [preauth]
Nov 23 20:56:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:49 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc003cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:49.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:49 compute-1 sudo[188367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eskphsvmvwnafpqbdejufvtaqmhhgtoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931409.110169-2286-136955137666298/AnsiballZ_copy.py'
Nov 23 20:56:49 compute-1 sudo[188367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:50 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:50 compute-1 python3.9[188369]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931409.110169-2286-136955137666298/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:56:50 compute-1 sudo[188367]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:50 compute-1 ceph-mon[80135]: pgmap v454: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 20:56:50 compute-1 sudo[188519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcupkcutbdkqghxffgwgwoqwvefpjsmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931410.2447076-2286-171263410337530/AnsiballZ_stat.py'
Nov 23 20:56:50 compute-1 sudo[188519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:50 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:50 compute-1 python3.9[188521]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:56:50 compute-1 sudo[188519]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:56:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:50.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:56:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:56:51.052 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 20:56:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:56:51.053 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 20:56:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:56:51.053 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 20:56:51 compute-1 auditd[701]: Audit daemon rotating log files
Nov 23 20:56:51 compute-1 sudo[188642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnmyvcbskfhyvfwzbxcqupnvftqidkko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931410.2447076-2286-171263410337530/AnsiballZ_copy.py'
Nov 23 20:56:51 compute-1 sudo[188642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:51 compute-1 python3.9[188644]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931410.2447076-2286-171263410337530/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:56:51 compute-1 sudo[188642]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:51 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:51 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb4001ca0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:51.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:51 compute-1 sudo[188795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-henexwveerzogxqxwovntyycjarvdifk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931411.5123878-2286-46819556889691/AnsiballZ_stat.py'
Nov 23 20:56:51 compute-1 sudo[188795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:51 compute-1 python3.9[188797]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:56:52 compute-1 sudo[188795]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:52 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc003e50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:52 compute-1 ceph-mon[80135]: pgmap v455: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 20:56:52 compute-1 sudo[188918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dedeiowafrkzjrjbyzzjukfksccseury ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931411.5123878-2286-46819556889691/AnsiballZ_copy.py'
Nov 23 20:56:52 compute-1 sudo[188918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:52 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc0091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:52 compute-1 python3.9[188920]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931411.5123878-2286-46819556889691/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:56:52 compute-1 sudo[188918]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:56:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:52.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:56:53 compute-1 sudo[189070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-waicphiolgoefuvfsvawlgqsbtytwexg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931412.7903717-2286-109300093662409/AnsiballZ_stat.py'
Nov 23 20:56:53 compute-1 sudo[189070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:53 compute-1 podman[189072]: 2025-11-23 20:56:53.148814903 +0000 UTC m=+0.053708777 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 23 20:56:53 compute-1 python3.9[189073]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:56:53 compute-1 sudo[189070]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:53 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:56:53 compute-1 sudo[189214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arbvqolixsfeoghieubfewrvobqostsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931412.7903717-2286-109300093662409/AnsiballZ_copy.py'
Nov 23 20:56:53 compute-1 sudo[189214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:53 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:56:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:53.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:56:53 compute-1 python3.9[189216]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931412.7903717-2286-109300093662409/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:56:53 compute-1 sudo[189214]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:54 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:54 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb40030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:54 compute-1 sudo[189366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuivnnbiesteskjstlxhutgyouwfvkfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931413.9631433-2286-72832770629622/AnsiballZ_stat.py'
Nov 23 20:56:54 compute-1 sudo[189366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:54 compute-1 ceph-mon[80135]: pgmap v456: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:56:54 compute-1 python3.9[189368]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:56:54 compute-1 sudo[189366]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:54 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:54 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc003e70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:54 compute-1 sudo[189489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzkxkaeeeepicqkwjkoyflxrkzjpowtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931413.9631433-2286-72832770629622/AnsiballZ_copy.py'
Nov 23 20:56:54 compute-1 sudo[189489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:54.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:54 compute-1 python3.9[189491]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931413.9631433-2286-72832770629622/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:56:54 compute-1 sudo[189489]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:55 compute-1 sudo[189641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmjttsqwhihzsnaaonxabpwcwwchdpcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931415.1074858-2286-151123828464600/AnsiballZ_stat.py'
Nov 23 20:56:55 compute-1 sudo[189641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:55 compute-1 python3.9[189643]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:56:55 compute-1 sudo[189641]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:55 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc0091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205655 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 20:56:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:56:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:55.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:56:55 compute-1 sudo[189765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqyxkfuemgejeshgzulmnltzsiguirlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931415.1074858-2286-151123828464600/AnsiballZ_copy.py'
Nov 23 20:56:55 compute-1 sudo[189765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:56 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:56 compute-1 python3.9[189767]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931415.1074858-2286-151123828464600/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:56:56 compute-1 sudo[189765]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:56 compute-1 ceph-mon[80135]: pgmap v457: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:56:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:56 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb40030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:56 compute-1 sudo[189917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yirbaqhbmsblayerfdahkzjwunhqjiyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931416.2932029-2286-55001929404330/AnsiballZ_stat.py'
Nov 23 20:56:56 compute-1 sudo[189917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:56 compute-1 python3.9[189919]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:56:56 compute-1 sudo[189917]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:56.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:57 compute-1 sudo[190040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnmdrbvfwsigusisvzizhcvyuqptnfwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931416.2932029-2286-55001929404330/AnsiballZ_copy.py'
Nov 23 20:56:57 compute-1 sudo[190040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:57 compute-1 python3.9[190042]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931416.2932029-2286-55001929404330/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:56:57 compute-1 sudo[190040]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:57 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc003e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:56:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:57.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:56:57 compute-1 sudo[190193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgqvsdrqolpnfncxqamnteepvlidigmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931417.500531-2286-270607796280990/AnsiballZ_stat.py'
Nov 23 20:56:57 compute-1 sudo[190193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:57 compute-1 python3.9[190195]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:56:57 compute-1 sudo[190193]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:58 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc0091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:58 compute-1 ceph-mon[80135]: pgmap v458: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 20:56:58 compute-1 sudo[190316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syjvyoeyrpxbxucgzkwbhqloyglpdnom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931417.500531-2286-270607796280990/AnsiballZ_copy.py'
Nov 23 20:56:58 compute-1 sudo[190316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:58 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:56:58 compute-1 python3.9[190318]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931417.500531-2286-270607796280990/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:56:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:58 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:58 compute-1 sudo[190316]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:56:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:58.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:56:58 compute-1 sudo[190468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihimjxydzxwcwwcglkparvboacwlxwxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931418.7044406-2286-136484090450146/AnsiballZ_stat.py'
Nov 23 20:56:58 compute-1 sudo[190468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:59 compute-1 python3.9[190470]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:56:59 compute-1 sudo[190468]: pam_unix(sudo:session): session closed for user root
Nov 23 20:56:59 compute-1 sudo[190592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opqamynurmtmwrfnacisywfanfrxohxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931418.7044406-2286-136484090450146/AnsiballZ_copy.py'
Nov 23 20:56:59 compute-1 sudo[190592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:56:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:59 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb4003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:56:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:56:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:56:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:59.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:56:59 compute-1 python3.9[190594]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931418.7044406-2286-136484090450146/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:56:59 compute-1 sudo[190592]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:00 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc003eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:00 compute-1 ceph-mon[80135]: pgmap v459: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 20:57:00 compute-1 sudo[190619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:57:00 compute-1 sudo[190619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:57:00 compute-1 sudo[190619]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:00 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc0091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:00.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:01 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:01.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:02 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb4003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:02 compute-1 ceph-mon[80135]: pgmap v460: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 20:57:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:02 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc003ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:02 compute-1 podman[190645]: 2025-11-23 20:57:02.663059474 +0000 UTC m=+0.075007816 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 23 20:57:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:02.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:57:03 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:57:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:03 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc0091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:03.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:04 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:04 compute-1 ceph-mon[80135]: pgmap v461: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 20:57:04 compute-1 python3.9[190797]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:57:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:04 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb4003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:57:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:04.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:57:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:04 : epoch 692374ba : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 20:57:05 compute-1 sudo[190950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqjpvtklkpzuolhduustcojfgkqwhtxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931425.0055678-2904-81916769789416/AnsiballZ_seboolean.py'
Nov 23 20:57:05 compute-1 sudo[190950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:05 compute-1 python3.9[190952]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 23 20:57:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:05 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc003ef0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:05.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:06 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc003ef0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:06 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:06.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:06 compute-1 ceph-mon[80135]: pgmap v462: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:57:07 compute-1 sudo[190950]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:07 compute-1 sudo[191108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtpcylqriihvnwpimxqeufntcjdaatho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931427.3595755-2928-275617790537933/AnsiballZ_copy.py'
Nov 23 20:57:07 compute-1 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 23 20:57:07 compute-1 sudo[191108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:07 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb4003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:07.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:07 compute-1 ceph-mon[80135]: pgmap v463: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:57:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:07 : epoch 692374ba : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 20:57:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:07 : epoch 692374ba : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:57:07 compute-1 python3.9[191110]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:57:07 compute-1 sudo[191108]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:08 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:08 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc003ef0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:08 compute-1 sudo[191260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmonbvlsiioorpykgddlxdfnxhxktmet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931428.065832-2928-66463438890384/AnsiballZ_copy.py'
Nov 23 20:57:08 compute-1 sudo[191260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:08 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:57:08 compute-1 python3.9[191262]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:57:08 compute-1 sudo[191260]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:08 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:08 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc0091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:08.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:08 compute-1 sudo[191412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-withdjihwfppzsgvmtoivclegavctlcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931428.6830065-2928-42703933337156/AnsiballZ_copy.py'
Nov 23 20:57:08 compute-1 sudo[191412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:09 compute-1 python3.9[191414]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:57:09 compute-1 sudo[191412]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:09 compute-1 sudo[191565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brzgjorzakohiwagfkfbhgersthxwdtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931429.3239982-2928-57766230156471/AnsiballZ_copy.py'
Nov 23 20:57:09 compute-1 sudo[191565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:09 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:57:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:09.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:57:09 compute-1 python3.9[191567]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:57:09 compute-1 sudo[191565]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:09 compute-1 ceph-mon[80135]: pgmap v464: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:57:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:10 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb4003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:10 compute-1 sudo[191719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whyrjehxgkbhiqeyqzfzaxniednukseu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931429.9503336-2928-79896479866659/AnsiballZ_copy.py'
Nov 23 20:57:10 compute-1 sudo[191719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:10 compute-1 python3.9[191721]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:57:10 compute-1 sudo[191719]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:10 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ca8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:10.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:10 : epoch 692374ba : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 20:57:11 compute-1 sshd-session[191746]: Invalid user solv from 161.35.133.66 port 49168
Nov 23 20:57:11 compute-1 sshd-session[191746]: Connection closed by invalid user solv 161.35.133.66 port 49168 [preauth]
Nov 23 20:57:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:11 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc0091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:11.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:11 compute-1 ceph-mon[80135]: pgmap v465: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 20:57:12 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:12 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cac0026d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:12 compute-1 sudo[191874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeufdrqowlcwejomanjumequboximtvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931431.8926058-3036-238226872855130/AnsiballZ_copy.py'
Nov 23 20:57:12 compute-1 sudo[191874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:12 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Nov 23 20:57:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:57:12.511513) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 20:57:12 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Nov 23 20:57:12 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931432511557, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 546, "num_deletes": 252, "total_data_size": 894695, "memory_usage": 905776, "flush_reason": "Manual Compaction"}
Nov 23 20:57:12 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Nov 23 20:57:12 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931432519750, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 411643, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18001, "largest_seqno": 18542, "table_properties": {"data_size": 409057, "index_size": 622, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6605, "raw_average_key_size": 19, "raw_value_size": 403893, "raw_average_value_size": 1191, "num_data_blocks": 28, "num_entries": 339, "num_filter_entries": 339, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763931398, "oldest_key_time": 1763931398, "file_creation_time": 1763931432, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 23 20:57:12 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 8299 microseconds, and 3610 cpu microseconds.
Nov 23 20:57:12 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 20:57:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:57:12.519811) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 411643 bytes OK
Nov 23 20:57:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:57:12.519838) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Nov 23 20:57:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:57:12.521345) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Nov 23 20:57:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:57:12.521371) EVENT_LOG_v1 {"time_micros": 1763931432521364, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 20:57:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:57:12.521391) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 20:57:12 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 891569, prev total WAL file size 891569, number of live WAL files 2.
Nov 23 20:57:12 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 20:57:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:57:12.522117) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353034' seq:0, type:0; will stop at (end)
Nov 23 20:57:12 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 20:57:12 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(401KB)], [30(15MB)]
Nov 23 20:57:12 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931432522165, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 16349380, "oldest_snapshot_seqno": -1}
Nov 23 20:57:12 compute-1 python3.9[191876]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:57:12 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:12 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb4003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:12 compute-1 sudo[191874]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:12 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 4917 keys, 12435371 bytes, temperature: kUnknown
Nov 23 20:57:12 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931432679905, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 12435371, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12401888, "index_size": 20061, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12357, "raw_key_size": 124078, "raw_average_key_size": 25, "raw_value_size": 12312063, "raw_average_value_size": 2503, "num_data_blocks": 836, "num_entries": 4917, "num_filter_entries": 4917, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763931432, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Nov 23 20:57:12 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 20:57:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:57:12.680109) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 12435371 bytes
Nov 23 20:57:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:57:12.687263) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 103.6 rd, 78.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 15.2 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(69.9) write-amplify(30.2) OK, records in: 5418, records dropped: 501 output_compression: NoCompression
Nov 23 20:57:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:57:12.687283) EVENT_LOG_v1 {"time_micros": 1763931432687274, "job": 16, "event": "compaction_finished", "compaction_time_micros": 157794, "compaction_time_cpu_micros": 52624, "output_level": 6, "num_output_files": 1, "total_output_size": 12435371, "num_input_records": 5418, "num_output_records": 4917, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 20:57:12 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 20:57:12 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931432687441, "job": 16, "event": "table_file_deletion", "file_number": 32}
Nov 23 20:57:12 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 20:57:12 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931432689857, "job": 16, "event": "table_file_deletion", "file_number": 30}
Nov 23 20:57:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:57:12.521995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:57:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:57:12.689930) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:57:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:57:12.689937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:57:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:57:12.689939) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:57:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:57:12.689941) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:57:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:57:12.689942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:57:12 compute-1 sudo[191955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:57:12 compute-1 sudo[191955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:57:12 compute-1 sudo[191955]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 20:57:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:12.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 20:57:12 compute-1 sudo[192001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 20:57:12 compute-1 sudo[192001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:57:12 compute-1 sudo[192076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlvobremdsmecdvlenetugfniliuulls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931432.7149253-3036-234415116916369/AnsiballZ_copy.py'
Nov 23 20:57:12 compute-1 sudo[192076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:13 compute-1 python3.9[192078]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:57:13 compute-1 sudo[192076]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:13 compute-1 sudo[192001]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:13 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:57:13 compute-1 sudo[192261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agnswoqrsmlkyvikdkvslwgxjbqmgtwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931433.3375103-3036-242785401395864/AnsiballZ_copy.py'
Nov 23 20:57:13 compute-1 sudo[192261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:13 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ca80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:57:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:13.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:57:13 compute-1 python3.9[192263]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:57:13 compute-1 sudo[192261]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:14 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ca80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:14 compute-1 sudo[192413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oarafchkpysvvynbomabfmnbwvfcgnix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931433.960361-3036-28922332097378/AnsiballZ_copy.py'
Nov 23 20:57:14 compute-1 sudo[192413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:14 compute-1 python3.9[192415]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:57:14 compute-1 sudo[192413]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:14 compute-1 ceph-mon[80135]: pgmap v466: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 20:57:14 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:57:14 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 20:57:14 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:57:14 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:57:14 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 20:57:14 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 20:57:14 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:57:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:14 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cac0026d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:14 compute-1 sudo[192565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqlfgufilbqqlpdcyrpqkghbznxtjrak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931434.549088-3036-90017818071224/AnsiballZ_copy.py'
Nov 23 20:57:14 compute-1 sudo[192565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:14.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:14 compute-1 python3.9[192567]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:57:15 compute-1 sudo[192565]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:15 compute-1 ceph-mon[80135]: pgmap v467: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 20:57:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:15 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb4003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:15.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:16 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ca80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:16 compute-1 sudo[192718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlbyzmtwphvetgpdmravqfoxsvkhbdpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931435.9118-3144-150396211231474/AnsiballZ_systemd.py'
Nov 23 20:57:16 compute-1 sudo[192718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:16 compute-1 python3.9[192720]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 20:57:16 compute-1 systemd[1]: Reloading.
Nov 23 20:57:16 compute-1 systemd-rc-local-generator[192748]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:57:16 compute-1 systemd-sysv-generator[192751]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:57:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:16 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ca80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:16 compute-1 systemd[1]: Starting libvirt logging daemon socket...
Nov 23 20:57:16 compute-1 systemd[1]: Listening on libvirt logging daemon socket.
Nov 23 20:57:16 compute-1 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 23 20:57:16 compute-1 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 23 20:57:16 compute-1 systemd[1]: Starting libvirt logging daemon...
Nov 23 20:57:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:16.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:16 compute-1 systemd[1]: Started libvirt logging daemon.
Nov 23 20:57:16 compute-1 sudo[192718]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:17 compute-1 sudo[192911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gaxxvbtencxrndxxittrdmojwezptaxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931437.0677104-3144-148456201350673/AnsiballZ_systemd.py'
Nov 23 20:57:17 compute-1 sudo[192911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:17 compute-1 python3.9[192913]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 20:57:17 compute-1 systemd[1]: Reloading.
Nov 23 20:57:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:17 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cac0020f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:17.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:17 compute-1 systemd-rc-local-generator[192946]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:57:17 compute-1 systemd-sysv-generator[192949]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:57:17 compute-1 ceph-mon[80135]: pgmap v468: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 20:57:18 compute-1 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 23 20:57:18 compute-1 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 23 20:57:18 compute-1 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 23 20:57:18 compute-1 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 23 20:57:18 compute-1 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 23 20:57:18 compute-1 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 23 20:57:18 compute-1 systemd[1]: Starting libvirt nodedev daemon...
Nov 23 20:57:18 compute-1 systemd[1]: Started libvirt nodedev daemon.
Nov 23 20:57:18 compute-1 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 23 20:57:18 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:18 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb4003db0 fd 38 proxy ignored for local
Nov 23 20:57:18 compute-1 kernel: ganesha.nfsd[191569]: segfault at 50 ip 00007f4d85dad32e sp 00007f4d3e7fb210 error 4 in libntirpc.so.5.8[7f4d85d92000+2c000] likely on CPU 3 (core 0, socket 3)
Nov 23 20:57:18 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 23 20:57:18 compute-1 sudo[192911]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:18 compute-1 systemd[1]: Started Process Core Dump (PID 192979/UID 0).
Nov 23 20:57:18 compute-1 sudo[193061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 20:57:18 compute-1 sudo[193061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:57:18 compute-1 sudo[193061]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:18 compute-1 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 23 20:57:18 compute-1 sudo[193157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krhoureakzwkauldwegimjbmhazjmtny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931438.2273061-3144-141137654501350/AnsiballZ_systemd.py'
Nov 23 20:57:18 compute-1 sudo[193157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:18 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:57:18 compute-1 systemd[1]: Created slice Slice /system/dbus-:1.0-org.fedoraproject.SetroubleshootPrivileged.
Nov 23 20:57:18 compute-1 systemd[1]: Started dbus-:1.0-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 23 20:57:18 compute-1 python3.9[193159]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 20:57:18 compute-1 systemd[1]: Reloading.
Nov 23 20:57:18 compute-1 systemd-rc-local-generator[193193]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:57:18 compute-1 systemd-sysv-generator[193197]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:57:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:57:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:18.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:57:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:57:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:57:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:57:19 compute-1 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 23 20:57:19 compute-1 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 23 20:57:19 compute-1 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 23 20:57:19 compute-1 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 23 20:57:19 compute-1 systemd[1]: Starting libvirt proxy daemon...
Nov 23 20:57:19 compute-1 systemd[1]: Started libvirt proxy daemon.
Nov 23 20:57:19 compute-1 sudo[193157]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:19 compute-1 systemd-coredump[192980]: Process 167914 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 60:
                                                    #0  0x00007f4d85dad32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 23 20:57:19 compute-1 systemd[1]: systemd-coredump@5-192979-0.service: Deactivated successfully.
Nov 23 20:57:19 compute-1 systemd[1]: systemd-coredump@5-192979-0.service: Consumed 1.128s CPU time.
Nov 23 20:57:19 compute-1 podman[193297]: 2025-11-23 20:57:19.409774106 +0000 UTC m=+0.029928721 container died d38ed78145ce27a698715b902dd179194e031801435ca90af85af498b8f8280c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 23 20:57:19 compute-1 systemd[1]: var-lib-containers-storage-overlay-dcfbd664ea36e63f9ed359c0f79e6328cde03d445ebf8fde9d5034673736dd60-merged.mount: Deactivated successfully.
Nov 23 20:57:19 compute-1 podman[193297]: 2025-11-23 20:57:19.476758438 +0000 UTC m=+0.096913033 container remove d38ed78145ce27a698715b902dd179194e031801435ca90af85af498b8f8280c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 23 20:57:19 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Main process exited, code=exited, status=139/n/a
Nov 23 20:57:19 compute-1 setroubleshoot[192978]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 13725f17-2087-483b-8286-d61f28d1887a
Nov 23 20:57:19 compute-1 setroubleshoot[192978]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Nov 23 20:57:19 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Failed with result 'exit-code'.
Nov 23 20:57:19 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.522s CPU time.
Nov 23 20:57:19 compute-1 sudo[193426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quxvskxllmkdqtznbtmlhardgdesrhci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931439.32547-3144-19300873555279/AnsiballZ_systemd.py'
Nov 23 20:57:19 compute-1 sudo[193426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:57:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:19.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:57:19 compute-1 python3.9[193428]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 20:57:19 compute-1 systemd[1]: Reloading.
Nov 23 20:57:20 compute-1 systemd-rc-local-generator[193455]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:57:20 compute-1 systemd-sysv-generator[193458]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:57:20 compute-1 systemd[1]: Listening on libvirt locking daemon socket.
Nov 23 20:57:20 compute-1 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 23 20:57:20 compute-1 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 23 20:57:20 compute-1 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 23 20:57:20 compute-1 ceph-mon[80135]: pgmap v469: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 20:57:20 compute-1 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 23 20:57:20 compute-1 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 23 20:57:20 compute-1 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 23 20:57:20 compute-1 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 23 20:57:20 compute-1 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 23 20:57:20 compute-1 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 23 20:57:20 compute-1 systemd[1]: Starting libvirt QEMU daemon...
Nov 23 20:57:20 compute-1 systemd[1]: Started libvirt QEMU daemon.
Nov 23 20:57:20 compute-1 sudo[193426]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:20 compute-1 sudo[193540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:57:20 compute-1 sudo[193540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:57:20 compute-1 sudo[193540]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:20 compute-1 sudo[193667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpencgcjcztxciywdgraxbkvrzktgtnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931440.4682117-3144-820452760424/AnsiballZ_systemd.py'
Nov 23 20:57:20 compute-1 sudo[193667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:20.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:21 compute-1 python3.9[193669]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 20:57:21 compute-1 systemd[1]: Reloading.
Nov 23 20:57:21 compute-1 systemd-rc-local-generator[193695]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:57:21 compute-1 systemd-sysv-generator[193699]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:57:21 compute-1 systemd[1]: Starting libvirt secret daemon socket...
Nov 23 20:57:21 compute-1 systemd[1]: Listening on libvirt secret daemon socket.
Nov 23 20:57:21 compute-1 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 23 20:57:21 compute-1 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 23 20:57:21 compute-1 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 23 20:57:21 compute-1 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 23 20:57:21 compute-1 systemd[1]: Starting libvirt secret daemon...
Nov 23 20:57:21 compute-1 systemd[1]: Started libvirt secret daemon.
Nov 23 20:57:21 compute-1 sudo[193667]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:21.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:22 compute-1 ceph-mon[80135]: pgmap v470: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 20:57:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:57:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:22.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:57:23 compute-1 sudo[193890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-polqurafzgxqigghrtfyfyakvvjrfwop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931443.191613-3256-93583107661717/AnsiballZ_file.py'
Nov 23 20:57:23 compute-1 sudo[193890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:23 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:57:23 compute-1 podman[193854]: 2025-11-23 20:57:23.517020706 +0000 UTC m=+0.072502639 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 23 20:57:23 compute-1 python3.9[193901]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:57:23 compute-1 sudo[193890]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:23.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:24 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205724 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 20:57:24 compute-1 ceph-mon[80135]: pgmap v471: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:57:24 compute-1 sudo[194053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkswvenjmrbzbwfcfpvbwdpagpovcfyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931444.1344967-3279-3103137369819/AnsiballZ_find.py'
Nov 23 20:57:24 compute-1 sudo[194053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:24 compute-1 python3.9[194055]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 23 20:57:24 compute-1 sudo[194053]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:24.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:25 compute-1 sudo[194205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cartztsllkxehintlzekvvspqrpecrjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931444.9255283-3303-70528645060447/AnsiballZ_command.py'
Nov 23 20:57:25 compute-1 sudo[194205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:25 compute-1 python3.9[194207]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:57:25 compute-1 sudo[194205]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:25.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:26 compute-1 ceph-mon[80135]: pgmap v472: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:57:26 compute-1 python3.9[194362]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 23 20:57:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:26.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:27 compute-1 python3.9[194512]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:57:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:27.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:27 compute-1 python3.9[194634]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763931446.9392507-3360-237808396368484/.source.xml follow=False _original_basename=secret.xml.j2 checksum=2095b2efdb764c083af64051baa9ed5d4618fea0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:57:28 compute-1 ceph-mon[80135]: pgmap v473: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 23 20:57:28 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:57:28 compute-1 sudo[194784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpnhsjwmakjucexlglajepgehgbivlkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931448.378376-3405-269874069368547/AnsiballZ_command.py'
Nov 23 20:57:28 compute-1 sudo[194784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:28 compute-1 python3.9[194786]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 03808be8-ae4a-5548-82e6-4a294f1bc627
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:57:28 compute-1 polkitd[43500]: Registered Authentication Agent for unix-process:194788:339745 (system bus name :1.1860 [pkttyagent --process 194788 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 23 20:57:28 compute-1 polkitd[43500]: Unregistered Authentication Agent for unix-process:194788:339745 (system bus name :1.1860, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 23 20:57:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:28.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:28 compute-1 polkitd[43500]: Registered Authentication Agent for unix-process:194787:339745 (system bus name :1.1861 [pkttyagent --process 194787 --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 23 20:57:28 compute-1 polkitd[43500]: Unregistered Authentication Agent for unix-process:194787:339745 (system bus name :1.1861, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 23 20:57:28 compute-1 sudo[194784]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:29 compute-1 systemd[1]: dbus-:1.0-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 23 20:57:29 compute-1 systemd[1]: dbus-:1.0-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.003s CPU time.
Nov 23 20:57:29 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Scheduled restart job, restart counter is at 6.
Nov 23 20:57:29 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 20:57:29 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.522s CPU time.
Nov 23 20:57:29 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 20:57:29 compute-1 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 23 20:57:29 compute-1 python3.9[194949]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:57:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:29.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:29 compute-1 podman[195014]: 2025-11-23 20:57:29.878510137 +0000 UTC m=+0.043872438 container create 53986badd315b38d8b9fa281241deaae5f5b036f9383287bb4abe40b27adebd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Nov 23 20:57:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fff88ddf62e59bbaca93d42aba99bc0cdc0c8fa1af4ad77cb6d0566221c0570/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 23 20:57:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fff88ddf62e59bbaca93d42aba99bc0cdc0c8fa1af4ad77cb6d0566221c0570/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 20:57:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fff88ddf62e59bbaca93d42aba99bc0cdc0c8fa1af4ad77cb6d0566221c0570/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 20:57:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fff88ddf62e59bbaca93d42aba99bc0cdc0c8fa1af4ad77cb6d0566221c0570/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.fuxuha-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 20:57:29 compute-1 podman[195014]: 2025-11-23 20:57:29.931612815 +0000 UTC m=+0.096975136 container init 53986badd315b38d8b9fa281241deaae5f5b036f9383287bb4abe40b27adebd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid)
Nov 23 20:57:29 compute-1 podman[195014]: 2025-11-23 20:57:29.938817991 +0000 UTC m=+0.104180292 container start 53986badd315b38d8b9fa281241deaae5f5b036f9383287bb4abe40b27adebd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 20:57:29 compute-1 bash[195014]: 53986badd315b38d8b9fa281241deaae5f5b036f9383287bb4abe40b27adebd8
Nov 23 20:57:29 compute-1 podman[195014]: 2025-11-23 20:57:29.857443892 +0000 UTC m=+0.022806213 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 20:57:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:29 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 23 20:57:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:29 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 23 20:57:29 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 20:57:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:29 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 23 20:57:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:29 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 23 20:57:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:29 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 23 20:57:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:29 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 23 20:57:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:29 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 23 20:57:30 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:30 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 20:57:30 compute-1 ceph-mon[80135]: pgmap v474: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 23 20:57:30 compute-1 sudo[195203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuxffngkrgzatmjezhphrmnclozkskdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931450.1295228-3453-124346946444872/AnsiballZ_command.py'
Nov 23 20:57:30 compute-1 sudo[195203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:30 compute-1 sudo[195203]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:30.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:31 compute-1 sudo[195356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttpixykbmrphehefusnpxfpymoywgtto ; FSID=03808be8-ae4a-5548-82e6-4a294f1bc627 KEY=AQC3cCNpAAAAABAAlqLdZNvpAVdz4ESvQvzNnA== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931451.0462835-3477-91774238114839/AnsiballZ_command.py'
Nov 23 20:57:31 compute-1 sudo[195356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:31 compute-1 polkitd[43500]: Registered Authentication Agent for unix-process:195360:340022 (system bus name :1.1864 [pkttyagent --process 195360 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 23 20:57:31 compute-1 polkitd[43500]: Unregistered Authentication Agent for unix-process:195360:340022 (system bus name :1.1864, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 23 20:57:31 compute-1 sudo[195356]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:31.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:32 compute-1 sudo[195515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdorpypiyyfsdzackdausyzeqyowlepv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931452.0043197-3501-184017067112371/AnsiballZ_copy.py'
Nov 23 20:57:32 compute-1 sudo[195515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:32 compute-1 ceph-mon[80135]: pgmap v475: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:57:32 compute-1 python3.9[195517]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:57:32 compute-1 sudo[195515]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:32.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:33 compute-1 sudo[195682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hddxrlinchjtfxqmeleaejemfalvsrde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931452.8606696-3525-209651606109064/AnsiballZ_stat.py'
Nov 23 20:57:33 compute-1 sudo[195682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:33 compute-1 podman[195641]: 2025-11-23 20:57:33.204666415 +0000 UTC m=+0.079189339 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 20:57:33 compute-1 python3.9[195687]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:57:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:57:33 compute-1 sudo[195682]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:33 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:57:33 compute-1 sudo[195817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teajxsldykbneomrbsvlwtquafftnple ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931452.8606696-3525-209651606109064/AnsiballZ_copy.py'
Nov 23 20:57:33 compute-1 sudo[195817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:33.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:33 compute-1 python3.9[195819]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1763931452.8606696-3525-209651606109064/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:57:33 compute-1 sudo[195817]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:34 compute-1 ceph-mon[80135]: pgmap v476: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:57:34 compute-1 sudo[195969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbwnkraasajgzmlcugnqfmrmolyueozz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931454.5701232-3573-235869911224976/AnsiballZ_file.py'
Nov 23 20:57:34 compute-1 sudo[195969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:57:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:34.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:57:35 compute-1 python3.9[195971]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:57:35 compute-1 sudo[195969]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:35 compute-1 sudo[196122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpofrugpfsuyreejjcocreibbvbdwtub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931455.3955934-3597-272092648290951/AnsiballZ_stat.py'
Nov 23 20:57:35 compute-1 sudo[196122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:35.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:35 compute-1 python3.9[196124]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:57:35 compute-1 sudo[196122]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:36 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:36 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 20:57:36 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:36 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:57:36 compute-1 sudo[196200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldbekvkkezlecemjjazxefuyitebfllx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931455.3955934-3597-272092648290951/AnsiballZ_file.py'
Nov 23 20:57:36 compute-1 sudo[196200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:36 compute-1 python3.9[196202]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:57:36 compute-1 sudo[196200]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:36 compute-1 ceph-mon[80135]: pgmap v477: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s
Nov 23 20:57:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:36.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:37 compute-1 sudo[196352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjufqmtsqrojzivsiswgficbxnqgycgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931456.8319764-3633-252137431198449/AnsiballZ_stat.py'
Nov 23 20:57:37 compute-1 sudo[196352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:37 compute-1 python3.9[196354]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:57:37 compute-1 sudo[196352]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:37 compute-1 sudo[196431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgtbtkdeopulxonypgpuhmxkantnnefr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931456.8319764-3633-252137431198449/AnsiballZ_file.py'
Nov 23 20:57:37 compute-1 sudo[196431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:37 compute-1 python3.9[196433]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.13jp45ov recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:57:37 compute-1 sudo[196431]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:37.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:38 compute-1 ceph-mon[80135]: pgmap v478: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s
Nov 23 20:57:38 compute-1 sudo[196583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfgzmfxknjziukavpqrtthstnblumcpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931458.175654-3669-181284597361250/AnsiballZ_stat.py'
Nov 23 20:57:38 compute-1 sudo[196583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:38 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:57:38 compute-1 python3.9[196585]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:57:38 compute-1 sudo[196583]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 20:57:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:38.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 20:57:38 compute-1 sudo[196661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmcvwurvhogppjblkqkvzfsdxgfttseh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931458.175654-3669-181284597361250/AnsiballZ_file.py'
Nov 23 20:57:38 compute-1 sudo[196661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:39 compute-1 python3.9[196663]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:57:39 compute-1 sudo[196661]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 20:57:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:39.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 20:57:39 compute-1 sudo[196814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcasjibmlrvgebtzjnoxeeycfcykegwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931459.5820575-3709-98981534299134/AnsiballZ_command.py'
Nov 23 20:57:39 compute-1 sudo[196814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:40 compute-1 python3.9[196816]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:57:40 compute-1 sudo[196814]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:40 compute-1 ceph-mon[80135]: pgmap v479: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s
Nov 23 20:57:40 compute-1 sudo[196888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:57:40 compute-1 sudo[196888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:57:40 compute-1 sudo[196888]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:57:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:40.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:57:40 compute-1 sudo[196992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzoywvmtaujnkdrdhhbaslwjbxgcrnfq ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763931460.508685-3732-264923189862154/AnsiballZ_edpm_nftables_from_files.py'
Nov 23 20:57:40 compute-1 sudo[196992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:41 compute-1 python3[196994]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 23 20:57:41 compute-1 sudo[196992]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:41.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:41 compute-1 sudo[197145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivoutgfbsepphzckngscxychzenovecl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931461.5275285-3756-177731561718554/AnsiballZ_stat.py'
Nov 23 20:57:41 compute-1 sudo[197145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:41 compute-1 python3.9[197147]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:57:42 compute-1 sudo[197145]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 20:57:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 23 20:57:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 23 20:57:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 23 20:57:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 23 20:57:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 23 20:57:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 23 20:57:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 20:57:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 20:57:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 20:57:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 23 20:57:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 20:57:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 23 20:57:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 23 20:57:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 23 20:57:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 23 20:57:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 23 20:57:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 23 20:57:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 23 20:57:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 23 20:57:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 23 20:57:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 23 20:57:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 20:57:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 23 20:57:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 20:57:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 23 20:57:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 23 20:57:42 compute-1 sudo[197234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvtbenoobqhfvbjlditnxkhbkgnymbfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931461.5275285-3756-177731561718554/AnsiballZ_file.py'
Nov 23 20:57:42 compute-1 sudo[197234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:42 compute-1 ceph-mon[80135]: pgmap v480: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Nov 23 20:57:42 compute-1 python3.9[197236]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:57:42 compute-1 sudo[197234]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b0c000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:42.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:43 compute-1 sudo[197389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ityrjvbytmesflocpxftuqxeqlxkstfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931463.0120702-3792-205425622821277/AnsiballZ_stat.py'
Nov 23 20:57:43 compute-1 sudo[197389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:43 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:57:43 compute-1 python3.9[197391]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:57:43 compute-1 sudo[197389]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:43 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af80016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205743 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 20:57:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:43.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:43 compute-1 sudo[197468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujcyvfwgxwhsnosakmmbkithsbqockzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931463.0120702-3792-205425622821277/AnsiballZ_file.py'
Nov 23 20:57:43 compute-1 sudo[197468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:43 compute-1 python3.9[197470]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:57:43 compute-1 sudo[197468]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:44 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:44 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:44 compute-1 ceph-mon[80135]: pgmap v481: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 20:57:44 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:44 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:44.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:45 compute-1 sudo[197620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sihvfvxxleisgfjfkcepokbjwfbtxtag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931464.8030636-3828-28722033965469/AnsiballZ_stat.py'
Nov 23 20:57:45 compute-1 sudo[197620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:45 compute-1 python3.9[197622]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:57:45 compute-1 sudo[197620]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:45 compute-1 sudo[197699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-haifbgiymyejzspivphjvvnrzvwugijf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931464.8030636-3828-28722033965469/AnsiballZ_file.py'
Nov 23 20:57:45 compute-1 sudo[197699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:45 compute-1 python3.9[197701]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:57:45 compute-1 sudo[197699]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:45 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:45 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 20:57:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:45.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 20:57:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205746 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 20:57:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:46 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af80016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:46 compute-1 ceph-mon[80135]: pgmap v482: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 20:57:46 compute-1 sudo[197851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crmoecqajvhvydnugezxbdqhbxclsemt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931466.2526946-3864-65900021248223/AnsiballZ_stat.py'
Nov 23 20:57:46 compute-1 sudo[197851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:46 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af80016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:46 compute-1 python3.9[197853]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:57:46 compute-1 sudo[197851]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 20:57:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:46.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 20:57:46 compute-1 sudo[197931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zphjmzrxhczfyphdmtxbvuludmsdcfgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931466.2526946-3864-65900021248223/AnsiballZ_file.py'
Nov 23 20:57:47 compute-1 sudo[197931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:47 compute-1 python3.9[197933]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:57:47 compute-1 sudo[197931]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:47 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b040023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:47.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:48 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:48 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:48 compute-1 sudo[198084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxoqfvrzfjfdsaucbgazrltvgbxbalof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931467.7226827-3900-125356492517370/AnsiballZ_stat.py'
Nov 23 20:57:48 compute-1 sudo[198084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:48 compute-1 sshd-session[197923]: Received disconnect from 118.145.189.160 port 47606:11: Bye Bye [preauth]
Nov 23 20:57:48 compute-1 sshd-session[197923]: Disconnected from authenticating user root 118.145.189.160 port 47606 [preauth]
Nov 23 20:57:48 compute-1 python3.9[198086]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:57:48 compute-1 sudo[198084]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:48 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:57:48 compute-1 ceph-mon[80135]: pgmap v483: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 20:57:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:57:48 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:48 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:48 compute-1 sudo[198209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnaqkteszesugnvdedgjntdhsliuzlvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931467.7226827-3900-125356492517370/AnsiballZ_copy.py'
Nov 23 20:57:48 compute-1 sudo[198209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:48.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:49 compute-1 python3.9[198211]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931467.7226827-3900-125356492517370/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:57:49 compute-1 sudo[198209]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:49 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af80016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:49.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:49 compute-1 sudo[198362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfyownmohqwhxwqhqaxgdqftymyrvnhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931469.6246114-3945-80714297767494/AnsiballZ_file.py'
Nov 23 20:57:49 compute-1 sudo[198362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:50 compute-1 python3.9[198364]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:57:50 compute-1 sudo[198362]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:50 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b040023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:50 compute-1 ceph-mon[80135]: pgmap v484: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 20:57:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:50 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:50 compute-1 sudo[198514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmkdztlkghowajgfrxgobytevklmslxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931470.453588-3969-96277873414153/AnsiballZ_command.py'
Nov 23 20:57:50 compute-1 sudo[198514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:50 compute-1 python3.9[198516]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:57:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:50.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:50 compute-1 sudo[198514]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:57:51.053 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 20:57:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:57:51.054 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 20:57:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:57:51.054 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 20:57:51 compute-1 ceph-mon[80135]: pgmap v485: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Nov 23 20:57:51 compute-1 sudo[198670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfligqmhfhmsltjrenpvydluahjlplzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931471.3430314-3993-179174853958649/AnsiballZ_blockinfile.py'
Nov 23 20:57:51 compute-1 sudo[198670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:51 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:51 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:51.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:51 compute-1 python3.9[198672]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:57:51 compute-1 sudo[198670]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:52 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af80016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:52 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b040023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:52 compute-1 sudo[198822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzzjbpnhavmxgkcoyvfptervyvqjzgbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931472.5062373-4020-119833824220986/AnsiballZ_command.py'
Nov 23 20:57:52 compute-1 sudo[198822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:52 compute-1 python3.9[198824]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:57:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:52.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:52 compute-1 sudo[198822]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:53 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:57:53 compute-1 sudo[198989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flfwazzrcreeovqxobivyokesgujnbau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931473.356007-4044-254451725268152/AnsiballZ_stat.py'
Nov 23 20:57:53 compute-1 sudo[198989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:53 compute-1 podman[198950]: 2025-11-23 20:57:53.658669956 +0000 UTC m=+0.074399819 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 20:57:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:53 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:53.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:53 compute-1 python3.9[198997]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:57:53 compute-1 sudo[198989]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:53 compute-1 ceph-mon[80135]: pgmap v486: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 20:57:54 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:54 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:54 compute-1 sudo[199149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umjbwxvmrxybbwdnwgxzmwswnqymhztr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931474.2330744-4068-22362376589935/AnsiballZ_command.py'
Nov 23 20:57:54 compute-1 sudo[199149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:54 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:54 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af80016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:54 compute-1 python3.9[199151]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:57:54 compute-1 sudo[199149]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:54.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:55 compute-1 sudo[199304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhugkxzwgbaxvdbabqyirkiyqmdmiieo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931475.0794673-4092-132231512171072/AnsiballZ_file.py'
Nov 23 20:57:55 compute-1 sudo[199304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:55 compute-1 python3.9[199306]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:57:55 compute-1 sudo[199304]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:55 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b040034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:55.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:56 compute-1 ceph-mon[80135]: pgmap v487: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 B/s wr, 0 op/s
Nov 23 20:57:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:56 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:56 compute-1 sudo[199457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eidqfjusayjancznqoyivomagnjabbio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931475.9867065-4116-33671384725885/AnsiballZ_stat.py'
Nov 23 20:57:56 compute-1 sudo[199457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:56 compute-1 python3.9[199459]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:57:56 compute-1 sudo[199457]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:56 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:56 compute-1 sudo[199580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nykxwmgodbwwzjhaflzzvubizoxzetoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931475.9867065-4116-33671384725885/AnsiballZ_copy.py'
Nov 23 20:57:56 compute-1 sudo[199580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:56.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:57 compute-1 python3.9[199582]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763931475.9867065-4116-33671384725885/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:57:57 compute-1 sudo[199580]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:57 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af80016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:57.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:57 compute-1 sudo[199733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iheocnlzmbmkbymkvzzmgdasvcdvzvbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931477.5665505-4161-236484583150557/AnsiballZ_stat.py'
Nov 23 20:57:57 compute-1 sudo[199733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:58 compute-1 ceph-mon[80135]: pgmap v488: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:57:58 compute-1 python3.9[199735]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:57:58 compute-1 sudo[199733]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:58 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b040034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:58 compute-1 sudo[199856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pulisnjwonfruutebancfgmehsvdjxiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931477.5665505-4161-236484583150557/AnsiballZ_copy.py'
Nov 23 20:57:58 compute-1 sudo[199856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:58 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:57:58 compute-1 python3.9[199858]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763931477.5665505-4161-236484583150557/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:57:58 compute-1 sudo[199856]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:58 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:58.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:59 compute-1 sudo[200008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuoifpesnpscwioqcxovbfsgctzvizkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931479.0427454-4207-59424836291676/AnsiballZ_stat.py'
Nov 23 20:57:59 compute-1 sudo[200008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:57:59 compute-1 python3.9[200010]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:57:59 compute-1 sudo[200008]: pam_unix(sudo:session): session closed for user root
Nov 23 20:57:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:59 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae80032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:57:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:57:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:57:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:59.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:57:59 compute-1 sudo[200132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaxoalrnzgluyyfexcqpmjdcpnzxzjty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931479.0427454-4207-59424836291676/AnsiballZ_copy.py'
Nov 23 20:57:59 compute-1 sudo[200132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:58:00 compute-1 python3.9[200134]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763931479.0427454-4207-59424836291676/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:58:00 compute-1 ceph-mon[80135]: pgmap v489: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:58:00 compute-1 sudo[200132]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:00 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af80016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:00 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b040034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:00 compute-1 sudo[200234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:58:00 compute-1 sudo[200234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:58:00 compute-1 sudo[200234]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:00 compute-1 sudo[200309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrufiejdnwhwdaqrcnozuhltecppkvxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931480.5170946-4251-186056617402563/AnsiballZ_systemd.py'
Nov 23 20:58:00 compute-1 sudo[200309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:58:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:00.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:01 compute-1 python3.9[200311]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 20:58:01 compute-1 systemd[1]: Reloading.
Nov 23 20:58:01 compute-1 systemd-rc-local-generator[200339]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:58:01 compute-1 systemd-sysv-generator[200342]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:58:01 compute-1 systemd[1]: Reached target edpm_libvirt.target.
Nov 23 20:58:01 compute-1 sudo[200309]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:01 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 20:58:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:01.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 20:58:02 compute-1 ceph-mon[80135]: pgmap v490: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 20:58:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:02 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae80032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:02 compute-1 sudo[200501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjewpyvtakdwtetyuzwnzmgybtjhshdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931481.9018147-4275-12545468439088/AnsiballZ_systemd.py'
Nov 23 20:58:02 compute-1 sudo[200501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:58:02 compute-1 python3.9[200503]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 23 20:58:02 compute-1 systemd[1]: Reloading.
Nov 23 20:58:02 compute-1 systemd-rc-local-generator[200528]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:58:02 compute-1 systemd-sysv-generator[200531]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:58:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:02 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af80016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:02 compute-1 systemd[1]: Reloading.
Nov 23 20:58:02 compute-1 systemd-rc-local-generator[200567]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:58:02 compute-1 systemd-sysv-generator[200571]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:58:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:02.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:03 compute-1 sudo[200501]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:03 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:58:03 compute-1 podman[200600]: 2025-11-23 20:58:03.680081453 +0000 UTC m=+0.092956455 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller)
Nov 23 20:58:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:03 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b040034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:03.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:03 compute-1 sshd-session[142384]: Connection closed by 192.168.122.30 port 38926
Nov 23 20:58:03 compute-1 sshd-session[142381]: pam_unix(sshd:session): session closed for user zuul
Nov 23 20:58:03 compute-1 systemd[1]: session-52.scope: Deactivated successfully.
Nov 23 20:58:03 compute-1 systemd[1]: session-52.scope: Consumed 3min 14.618s CPU time.
Nov 23 20:58:03 compute-1 systemd-logind[793]: Session 52 logged out. Waiting for processes to exit.
Nov 23 20:58:03 compute-1 systemd-logind[793]: Removed session 52.
Nov 23 20:58:04 compute-1 ceph-mon[80135]: pgmap v491: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:58:04 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:58:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:04 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:04 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:04.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:05 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:05.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:06 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b040034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:06 compute-1 ceph-mon[80135]: pgmap v492: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 20:58:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:06 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:06.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:07 compute-1 ceph-mon[80135]: pgmap v493: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:58:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:07 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:07.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:08 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:08 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af8003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:08 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:58:08 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:08 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b040034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:08.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:09 compute-1 sshd-session[200628]: Accepted publickey for zuul from 192.168.122.30 port 37818 ssh2: ECDSA SHA256:7LF3rB/846W//CS4OIcVKlH1BXQGVCcZuH+b9rjPyTo
Nov 23 20:58:09 compute-1 systemd-logind[793]: New session 53 of user zuul.
Nov 23 20:58:09 compute-1 systemd[1]: Started Session 53 of User zuul.
Nov 23 20:58:09 compute-1 sshd-session[200628]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 20:58:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:09 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b040034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:09.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:09 compute-1 ceph-mon[80135]: pgmap v494: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:58:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:10 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:10 compute-1 python3.9[200782]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:58:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:10 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af8003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:10.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:11 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af8003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:11.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:12 compute-1 ceph-mon[80135]: pgmap v495: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:58:12 compute-1 sshd-session[200811]: Invalid user jose from 102.176.81.29 port 44472
Nov 23 20:58:12 compute-1 python3.9[200939]: ansible-ansible.builtin.service_facts Invoked
Nov 23 20:58:12 compute-1 network[200956]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 20:58:12 compute-1 network[200957]: 'network-scripts' will be removed from distribution in near future.
Nov 23 20:58:12 compute-1 network[200958]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 20:58:12 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:12 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b040034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:12 compute-1 sshd-session[200811]: Received disconnect from 102.176.81.29 port 44472:11: Bye Bye [preauth]
Nov 23 20:58:12 compute-1 sshd-session[200811]: Disconnected from invalid user jose 102.176.81.29 port 44472 [preauth]
Nov 23 20:58:12 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:12 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:12.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:13 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:58:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205813 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 20:58:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:13 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:13.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:14 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:14 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ad8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:14.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:15 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af8003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:15 compute-1 ceph-mon[80135]: pgmap v496: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 20:58:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:15.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:16 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:16 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:16 compute-1 ceph-mon[80135]: pgmap v497: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 20:58:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:58:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:16.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:58:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:17 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af8003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 20:58:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:17.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 20:58:17 compute-1 ceph-mon[80135]: pgmap v498: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 20:58:18 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:18 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af8003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:18 compute-1 sudo[201233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odomuujeeabttylzpipnfcufjfsgesvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931497.9135146-102-133467138390306/AnsiballZ_setup.py'
Nov 23 20:58:18 compute-1 sudo[201233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:58:18 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:58:18 compute-1 sudo[201236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:58:18 compute-1 sudo[201236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:58:18 compute-1 sudo[201236]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:18 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:18 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:18 compute-1 sudo[201261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 20:58:18 compute-1 sudo[201261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:58:18 compute-1 python3.9[201235]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 20:58:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:18.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:19 compute-1 sudo[201233]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:19 compute-1 sudo[201261]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:19 compute-1 sudo[201397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpjwghjtqdfkvkhvlvcsxhksncuqveph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931497.9135146-102-133467138390306/AnsiballZ_dnf.py'
Nov 23 20:58:19 compute-1 sudo[201397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:58:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:19 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 20:58:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:19.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 20:58:19 compute-1 python3.9[201399]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 20:58:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:20 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ad80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:20 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af8003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:20 compute-1 sudo[201402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:58:20 compute-1 sudo[201402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:58:20 compute-1 sudo[201402]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:20.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:21 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:21.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:22 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:58:22 compute-1 ceph-mon[80135]: pgmap v499: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 20:58:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:22 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:22 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ad8001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:22.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:23 compute-1 ceph-mon[80135]: pgmap v500: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 20:58:23 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:58:23 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 20:58:23 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:58:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:23 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af8003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:23.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:24 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:24 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:24 compute-1 ceph-mon[80135]: pgmap v501: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 20:58:24 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:58:24 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:58:24 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 20:58:24 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 20:58:24 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:58:24 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:24 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:24 compute-1 podman[201429]: 2025-11-23 20:58:24.634599341 +0000 UTC m=+0.051546557 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 20:58:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:24.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:25 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ad8001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:25.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:26 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:26 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af80037a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:26 compute-1 ceph-mon[80135]: pgmap v502: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:58:26 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:26 : epoch 69237539 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 20:58:26 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:26 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:26 compute-1 sudo[201397]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:58:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:26.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:58:27 compute-1 sudo[201600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbhiqnxwaufnrntvccgkxbbmbblpyhba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931507.0709248-138-6276463240108/AnsiballZ_stat.py'
Nov 23 20:58:27 compute-1 sudo[201600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:58:27 compute-1 python3.9[201602]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:58:27 compute-1 sudo[201600]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:27 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:27.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:28 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ad8001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:28 compute-1 sudo[201627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 20:58:28 compute-1 sudo[201627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:58:28 compute-1 sudo[201627]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:28 compute-1 ceph-mon[80135]: pgmap v503: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:58:28 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:58:28 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:58:28 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:58:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:28 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af80037c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:28 compute-1 sudo[201777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwxqvgagpggvyrnwyiaypohfzawkpnie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931508.2679434-168-108965037867501/AnsiballZ_command.py'
Nov 23 20:58:28 compute-1 sudo[201777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:58:28 compute-1 python3.9[201779]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:58:28 compute-1 sudo[201777]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:28.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:29 : epoch 69237539 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 20:58:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:29 : epoch 69237539 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:58:29 compute-1 sudo[201931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sakimxueumxymracrqupnkhqmcalndrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931509.4083748-198-249845502100237/AnsiballZ_stat.py'
Nov 23 20:58:29 compute-1 sudo[201931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:58:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:29 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:29 compute-1 python3.9[201933]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:58:29 compute-1 sudo[201931]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:29.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:30 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:30 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:30 compute-1 sudo[202083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkdedxhrbeicuyscylksyoivjedledij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931510.139523-222-110041846078906/AnsiballZ_command.py'
Nov 23 20:58:30 compute-1 sudo[202083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:58:30 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:30 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ad80032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:30 compute-1 python3.9[202085]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:58:30 compute-1 sudo[202083]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:30 compute-1 ceph-mon[80135]: pgmap v504: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:58:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:30.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:31 compute-1 sudo[202236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yguiywziidtvyvvjdoculrbdnpsdznav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931511.0419207-246-40530868588055/AnsiballZ_stat.py'
Nov 23 20:58:31 compute-1 sudo[202236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:58:31 compute-1 python3.9[202238]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:58:31 compute-1 sudo[202236]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:31 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af80037e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:31 compute-1 ceph-mon[80135]: pgmap v505: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 20:58:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:58:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:31.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:58:31 compute-1 sudo[202360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngbuvfueehgotavbpkhrswhdnipmdqus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931511.0419207-246-40530868588055/AnsiballZ_copy.py'
Nov 23 20:58:31 compute-1 sudo[202360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:58:32 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:32 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:32 compute-1 python3.9[202362]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763931511.0419207-246-40530868588055/.source.iscsi _original_basename=.yo4xdt8g follow=False checksum=42fe1ad2782de6c869e598a65c6917a7cbe14437 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:58:32 compute-1 sudo[202360]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:32 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:32 : epoch 69237539 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 20:58:32 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:32 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:32 compute-1 sudo[202512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qidjbwheebhjzpmqygabbzsytryvzfdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931512.5251155-291-246373394911858/AnsiballZ_file.py'
Nov 23 20:58:32 compute-1 sudo[202512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:58:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:32.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:33 compute-1 python3.9[202514]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:58:33 compute-1 sudo[202512]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:33 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:58:33 compute-1 sudo[202676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjhsbjpshylmlwoqyjmyvkqxanhlehdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931513.4011364-315-48864478803101/AnsiballZ_lineinfile.py'
Nov 23 20:58:33 compute-1 sudo[202676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:58:33 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:33 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ad80032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:33.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:33 compute-1 podman[202639]: 2025-11-23 20:58:33.881852133 +0000 UTC m=+0.122448030 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 23 20:58:34 compute-1 ceph-mon[80135]: pgmap v506: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 20:58:34 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:58:34 compute-1 python3.9[202684]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:58:34 compute-1 sudo[202676]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:34 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:34 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af8003800 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:34 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:34 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 20:58:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:34.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 20:58:35 compute-1 sudo[202843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yazdgencxgcnjelhuadtrdxvqwvuqixa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931514.4922986-342-114758646781849/AnsiballZ_systemd_service.py'
Nov 23 20:58:35 compute-1 sudo[202843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:58:35 compute-1 python3.9[202845]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 20:58:35 compute-1 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 23 20:58:35 compute-1 sudo[202843]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:35 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:58:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:35.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:58:36 compute-1 ceph-mon[80135]: pgmap v507: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 20:58:36 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:36 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ad8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:36 compute-1 sudo[203000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llhdrxnogbwulesjwzbnmebndduekryz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931515.8384027-366-226907104746297/AnsiballZ_systemd_service.py'
Nov 23 20:58:36 compute-1 sudo[203000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:58:36 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:36 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af8003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:36 compute-1 python3.9[203002]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 20:58:36 compute-1 systemd[1]: Reloading.
Nov 23 20:58:36 compute-1 systemd-rc-local-generator[203031]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:58:36 compute-1 systemd-sysv-generator[203036]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:58:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:36.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:37 compute-1 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 23 20:58:37 compute-1 systemd[1]: Starting Open-iSCSI...
Nov 23 20:58:37 compute-1 kernel: Loading iSCSI transport class v2.0-870.
Nov 23 20:58:37 compute-1 systemd[1]: Started Open-iSCSI.
Nov 23 20:58:37 compute-1 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 23 20:58:37 compute-1 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 23 20:58:37 compute-1 sudo[203000]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205837 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 20:58:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:37 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:37.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:38 compute-1 sudo[203201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njxzwghlalsxzrvvolfgpjpnulglgmoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931517.7441864-399-127619398344698/AnsiballZ_service_facts.py'
Nov 23 20:58:38 compute-1 sudo[203201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:58:38 compute-1 ceph-mon[80135]: pgmap v508: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 20:58:38 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:38 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:38 compute-1 python3.9[203203]: ansible-ansible.builtin.service_facts Invoked
Nov 23 20:58:38 compute-1 network[203220]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 20:58:38 compute-1 network[203221]: 'network-scripts' will be removed from distribution in near future.
Nov 23 20:58:38 compute-1 network[203222]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 20:58:38 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:58:38 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:38 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:38.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:39 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:39 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af8003840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:39.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:40 compute-1 ceph-mon[80135]: pgmap v509: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 20:58:40 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:40 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:40 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:40 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:40 compute-1 sudo[203333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:58:40 compute-1 sudo[203333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:58:40 compute-1 sudo[203333]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 20:58:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:40.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 20:58:41 compute-1 sudo[203201]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:41 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ad8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:41.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:42 compute-1 ceph-mon[80135]: pgmap v510: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 20:58:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af8003860 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 20:58:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:42.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 20:58:43 compute-1 sudo[203519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biigjfjgqqurszncsopaelsvpqdfxcei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931523.1891098-429-67854354178336/AnsiballZ_file.py'
Nov 23 20:58:43 compute-1 sudo[203519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:58:43 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:58:43 compute-1 python3.9[203521]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 23 20:58:43 compute-1 sudo[203519]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:43 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:58:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:43.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:58:44 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:44 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ad8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:44 compute-1 ceph-mon[80135]: pgmap v511: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 20:58:44 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:44 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:44 compute-1 sudo[203673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npwbjrwgpcgxbdfftenwiphsvhaphppq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931524.3622992-453-191912246516909/AnsiballZ_modprobe.py'
Nov 23 20:58:44 compute-1 sudo[203673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:58:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:44.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:45 compute-1 python3.9[203675]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 23 20:58:45 compute-1 sudo[203673]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:45 compute-1 sudo[203830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntusolmqibcnusfdibhrogtcxjqqkxtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931525.345442-477-214486405983401/AnsiballZ_stat.py'
Nov 23 20:58:45 compute-1 sudo[203830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:58:45 compute-1 python3.9[203832]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:58:45 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:45 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:45 compute-1 sudo[203830]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:45.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:46 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:46 compute-1 sudo[203954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfchliypytucmtsnywxuhimprejmkzqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931525.345442-477-214486405983401/AnsiballZ_copy.py'
Nov 23 20:58:46 compute-1 sudo[203954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:58:46 compute-1 python3.9[203956]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763931525.345442-477-214486405983401/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:58:46 compute-1 sudo[203954]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:46 compute-1 ceph-mon[80135]: pgmap v512: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 20:58:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:46 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:46.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:47 compute-1 sudo[204106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfkhybzbnnngthuvoeamsnffnorfeljo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931527.013258-525-6748363602011/AnsiballZ_lineinfile.py'
Nov 23 20:58:47 compute-1 sudo[204106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:58:47 compute-1 python3.9[204108]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:58:47 compute-1 sudo[204106]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:47 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 20:58:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:47.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 20:58:48 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:48 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:48 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:58:48 compute-1 ceph-mon[80135]: pgmap v513: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 20:58:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:58:48 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:48 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:48 compute-1 sudo[204259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbrluxqibsuoqapnjzmgmdrqxmqqdnjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931528.0968337-549-149079342364956/AnsiballZ_systemd.py'
Nov 23 20:58:48 compute-1 sudo[204259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:58:48 compute-1 python3.9[204261]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 20:58:48 compute-1 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 23 20:58:48 compute-1 systemd[1]: Stopped Load Kernel Modules.
Nov 23 20:58:48 compute-1 systemd[1]: Stopping Load Kernel Modules...
Nov 23 20:58:48 compute-1 systemd[1]: Starting Load Kernel Modules...
Nov 23 20:58:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 20:58:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:48.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 20:58:48 compute-1 systemd[1]: Finished Load Kernel Modules.
Nov 23 20:58:49 compute-1 sudo[204259]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:49 compute-1 sudo[204416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atctkuqfrsiypmfpvzpjfvohihflnnnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931529.3674853-573-30629914029487/AnsiballZ_file.py'
Nov 23 20:58:49 compute-1 sudo[204416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:58:49 compute-1 python3.9[204418]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:58:49 compute-1 sudo[204416]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:49 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:49.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:50 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:50 compute-1 ceph-mon[80135]: pgmap v514: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 20:58:50 compute-1 sudo[204568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nprwrqmupnnbkchxoynmcetksbjhqjyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931530.3085501-600-147406809543140/AnsiballZ_stat.py'
Nov 23 20:58:50 compute-1 sudo[204568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:58:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:50 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:50 compute-1 python3.9[204570]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:58:50 compute-1 sudo[204568]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:50.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:58:51.054 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 20:58:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:58:51.055 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 20:58:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:58:51.055 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 20:58:51 compute-1 ceph-mon[80135]: pgmap v515: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 23 20:58:51 compute-1 sudo[204721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eonprdbgrrapwrnnfdzizjcanhpofmgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931531.3660524-627-145654781820475/AnsiballZ_stat.py'
Nov 23 20:58:51 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:51 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:51 compute-1 sudo[204721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:58:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:51.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:52 compute-1 python3.9[204723]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:58:52 compute-1 sudo[204721]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:52 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:52 compute-1 sudo[204873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imzyyaubxhwaznrrymfhucwcciktzfml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931532.3490767-651-218517626441705/AnsiballZ_stat.py'
Nov 23 20:58:52 compute-1 sudo[204873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:58:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:52 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:52 compute-1 python3.9[204875]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:58:52 compute-1 sudo[204873]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 20:58:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:52.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 20:58:53 compute-1 sudo[204996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkcwhhduwocsnzigqoaguofvhazgrbhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931532.3490767-651-218517626441705/AnsiballZ_copy.py'
Nov 23 20:58:53 compute-1 sudo[204996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:58:53 compute-1 python3.9[204998]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763931532.3490767-651-218517626441705/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:58:53 compute-1 sudo[204996]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:53 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:58:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:53 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:53.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:54 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:54 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:54 compute-1 sudo[205149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-napnrmyfjjuzrtcwfdphcprarukowbjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931533.8514225-696-173621830931511/AnsiballZ_command.py'
Nov 23 20:58:54 compute-1 sudo[205149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:58:54 compute-1 python3.9[205151]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:58:54 compute-1 sudo[205149]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:54 compute-1 ceph-mon[80135]: pgmap v516: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:58:54 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:54 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:54.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:54 compute-1 sudo[205315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llcgsexbferlftirbvbhyupvgagxtorv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931534.7260532-720-146371087836639/AnsiballZ_lineinfile.py'
Nov 23 20:58:54 compute-1 sudo[205315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:58:55 compute-1 podman[205276]: 2025-11-23 20:58:55.003413125 +0000 UTC m=+0.053017887 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 23 20:58:55 compute-1 python3.9[205323]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:58:55 compute-1 sudo[205315]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:55 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:58:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:55.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:58:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:56 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:56 compute-1 sudo[205475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvhoqqczmuvrghjxmztnkxedbihjurtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931535.5047188-744-214030389561257/AnsiballZ_replace.py'
Nov 23 20:58:56 compute-1 sudo[205475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:58:56 compute-1 python3.9[205477]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:58:56 compute-1 sudo[205475]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:56 compute-1 ceph-mon[80135]: pgmap v517: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 20:58:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:56 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:56 compute-1 sudo[205627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfeqtordgzivyucavyullnbtrfenwljj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931536.682534-768-31256342165667/AnsiballZ_replace.py'
Nov 23 20:58:56 compute-1 sudo[205627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:58:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:56.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:57 compute-1 python3.9[205629]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:58:57 compute-1 sudo[205627]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205857 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 20:58:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:57 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:57 compute-1 sudo[205780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhbgkkydezpdrwllrywpfrylhwpfcenz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931537.578499-795-217820704637747/AnsiballZ_lineinfile.py'
Nov 23 20:58:57 compute-1 sudo[205780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:58:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:57.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:58 compute-1 python3.9[205782]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:58:58 compute-1 sudo[205780]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:58 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:58 compute-1 sudo[205932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfqfdjuilvkgesprvmkyolxvmnqtgxtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931538.239292-795-193004465953089/AnsiballZ_lineinfile.py'
Nov 23 20:58:58 compute-1 sudo[205932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:58:58 compute-1 ceph-mon[80135]: pgmap v518: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 20:58:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:58 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:58 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:58:58 compute-1 python3.9[205934]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:58:58 compute-1 sudo[205932]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:58.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:59 compute-1 sudo[206084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxezdstxzhawiqdvqiegcfprlkziapxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931538.8230944-795-12114649084944/AnsiballZ_lineinfile.py'
Nov 23 20:58:59 compute-1 sudo[206084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:58:59 compute-1 python3.9[206086]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:58:59 compute-1 sudo[206084]: pam_unix(sudo:session): session closed for user root
Nov 23 20:58:59 compute-1 ceph-mon[80135]: pgmap v519: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 20:58:59 compute-1 sudo[206237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkjkvsvoinuwfhtpmrlwpmgblruskake ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931539.418351-795-280677914100628/AnsiballZ_lineinfile.py'
Nov 23 20:58:59 compute-1 sudo[206237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:58:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:59 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04004780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:58:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:58:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:58:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:59.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:58:59 compute-1 python3.9[206239]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:58:59 compute-1 sudo[206237]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:00 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04004780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:00 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:00 compute-1 sudo[206389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niermztxdlttcmgsnnrylxmpjqdbjwwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931540.6701806-882-127193627803422/AnsiballZ_stat.py'
Nov 23 20:59:00 compute-1 sudo[206389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:59:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:00.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:59:01 compute-1 sudo[206391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:59:01 compute-1 sudo[206391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:59:01 compute-1 sudo[206391]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:01 compute-1 python3.9[206392]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:59:01 compute-1 sudo[206389]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:01 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:01 compute-1 sudo[206569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cionpytbjuwxkvprqhpjbpxcoihrbxta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931541.5926752-906-28151872190204/AnsiballZ_file.py'
Nov 23 20:59:01 compute-1 sudo[206569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:01.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:02 compute-1 ceph-mon[80135]: pgmap v520: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:59:02 compute-1 python3.9[206571]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:59:02 compute-1 sudo[206569]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:02 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04004780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:02 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:02 compute-1 sudo[206721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlnjlvhmbmrwfturukwfspccciwapgvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931542.5343204-933-254016952136566/AnsiballZ_file.py'
Nov 23 20:59:02 compute-1 sudo[206721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:02 compute-1 python3.9[206723]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:59:02 compute-1 sudo[206721]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 20:59:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:02.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 20:59:03 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:59:03 compute-1 sudo[206874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsynafslxolrrfapaakmomqsopdbmibi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931543.3849037-957-154945253278636/AnsiballZ_stat.py'
Nov 23 20:59:03 compute-1 sudo[206874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:03 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:03 compute-1 python3.9[206876]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:59:03 compute-1 sudo[206874]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:03.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:04 compute-1 ceph-mon[80135]: pgmap v521: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 20:59:04 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:59:04 compute-1 sudo[206969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmcyboiqwhwmdqatehqfvfhstegqhkym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931543.3849037-957-154945253278636/AnsiballZ_file.py'
Nov 23 20:59:04 compute-1 sudo[206969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:04 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:04 compute-1 podman[206926]: 2025-11-23 20:59:04.197920779 +0000 UTC m=+0.111567774 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 23 20:59:04 compute-1 python3.9[206974]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:59:04 compute-1 sudo[206969]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:04 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04004780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:04 compute-1 sudo[207130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmpfrggrtggceydiuhredplasqswesnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931544.4999738-957-89530213341870/AnsiballZ_stat.py'
Nov 23 20:59:04 compute-1 sudo[207130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:04 compute-1 python3.9[207132]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:59:04 compute-1 sudo[207130]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:05.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:05 compute-1 sudo[207208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moptjjimgosalnpmrkorwdaqdjmbmcfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931544.4999738-957-89530213341870/AnsiballZ_file.py'
Nov 23 20:59:05 compute-1 sudo[207208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:05 compute-1 python3.9[207210]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:59:05 compute-1 sudo[207208]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:05 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:05.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:06 compute-1 ceph-mon[80135]: pgmap v522: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:59:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:06 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec0041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:06 compute-1 sudo[207361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwgpsnbggnewkdlsbzfirtkkpsoojnwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931546.1905198-1026-84923314078405/AnsiballZ_file.py'
Nov 23 20:59:06 compute-1 sudo[207361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:06 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:06 compute-1 python3.9[207363]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:59:06 compute-1 sudo[207361]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:06 : epoch 69237539 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 20:59:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:59:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:07.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:59:07 compute-1 sudo[207513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvwpytyybzdgfaswutyhquhvibtmrapr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931547.150607-1052-113546213597675/AnsiballZ_stat.py'
Nov 23 20:59:07 compute-1 sudo[207513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:07 compute-1 python3.9[207515]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:59:07 compute-1 sudo[207513]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:07 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0.
Nov 23 20:59:07 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:59:07.746688) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 20:59:07 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34
Nov 23 20:59:07 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931547746752, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1302, "num_deletes": 255, "total_data_size": 3217770, "memory_usage": 3273592, "flush_reason": "Manual Compaction"}
Nov 23 20:59:07 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started
Nov 23 20:59:07 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931547760916, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 2105783, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18547, "largest_seqno": 19844, "table_properties": {"data_size": 2100198, "index_size": 2977, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11231, "raw_average_key_size": 18, "raw_value_size": 2089134, "raw_average_value_size": 3470, "num_data_blocks": 133, "num_entries": 602, "num_filter_entries": 602, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763931433, "oldest_key_time": 1763931433, "file_creation_time": 1763931547, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 23 20:59:07 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 14256 microseconds, and 6561 cpu microseconds.
Nov 23 20:59:07 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 20:59:07 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:59:07.760953) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 2105783 bytes OK
Nov 23 20:59:07 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:59:07.760971) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started
Nov 23 20:59:07 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:59:07.762585) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done
Nov 23 20:59:07 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:59:07.762602) EVENT_LOG_v1 {"time_micros": 1763931547762597, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 20:59:07 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:59:07.762618) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 20:59:07 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3211677, prev total WAL file size 3211677, number of live WAL files 2.
Nov 23 20:59:07 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 20:59:07 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:59:07.763444) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Nov 23 20:59:07 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 20:59:07 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(2056KB)], [33(11MB)]
Nov 23 20:59:07 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931547763506, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 14541154, "oldest_snapshot_seqno": -1}
Nov 23 20:59:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:07 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04004780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:07 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 4995 keys, 14064059 bytes, temperature: kUnknown
Nov 23 20:59:07 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931547885107, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 14064059, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14029019, "index_size": 21426, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12549, "raw_key_size": 126852, "raw_average_key_size": 25, "raw_value_size": 13936817, "raw_average_value_size": 2790, "num_data_blocks": 881, "num_entries": 4995, "num_filter_entries": 4995, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763931547, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 23 20:59:07 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 20:59:07 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:59:07.885295) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 14064059 bytes
Nov 23 20:59:07 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:59:07.886498) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 119.5 rd, 115.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 11.9 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(13.6) write-amplify(6.7) OK, records in: 5519, records dropped: 524 output_compression: NoCompression
Nov 23 20:59:07 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:59:07.886513) EVENT_LOG_v1 {"time_micros": 1763931547886505, "job": 18, "event": "compaction_finished", "compaction_time_micros": 121652, "compaction_time_cpu_micros": 28566, "output_level": 6, "num_output_files": 1, "total_output_size": 14064059, "num_input_records": 5519, "num_output_records": 4995, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 20:59:07 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 20:59:07 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931547886890, "job": 18, "event": "table_file_deletion", "file_number": 35}
Nov 23 20:59:07 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 20:59:07 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931547888547, "job": 18, "event": "table_file_deletion", "file_number": 33}
Nov 23 20:59:07 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:59:07.763342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:59:07 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:59:07.888573) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:59:07 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:59:07.888577) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:59:07 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:59:07.888579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:59:07 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:59:07.888581) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:59:07 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:59:07.888582) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 20:59:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:07.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:07 compute-1 sudo[207592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yywreccwvcjwycuqblbssbsdygbdcbru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931547.150607-1052-113546213597675/AnsiballZ_file.py'
Nov 23 20:59:07 compute-1 sudo[207592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:08 compute-1 ceph-mon[80135]: pgmap v523: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:59:08 compute-1 python3.9[207594]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:59:08 compute-1 sudo[207592]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:08 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:08 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:08 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:08 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:08 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:59:08 compute-1 sudo[207744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smlchavrucsxuhrwsaijusetpktskpha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931548.4659376-1086-26358797470133/AnsiballZ_stat.py'
Nov 23 20:59:08 compute-1 sudo[207744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:08 compute-1 python3.9[207746]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:59:08 compute-1 sudo[207744]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:09.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:09 compute-1 sudo[207822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqsklbqpawmrvxllqupjthgmzhzxvgdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931548.4659376-1086-26358797470133/AnsiballZ_file.py'
Nov 23 20:59:09 compute-1 sudo[207822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:09 compute-1 python3.9[207824]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:59:09 compute-1 sudo[207822]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:09 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:09.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:09 : epoch 69237539 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 20:59:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:09 : epoch 69237539 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:59:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:09 : epoch 69237539 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:59:10 compute-1 sudo[207975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmewepkgrfuyxijbpdjmfygwhnudmgal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931549.860129-1122-262649352656881/AnsiballZ_systemd.py'
Nov 23 20:59:10 compute-1 sudo[207975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:10 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04004780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:10 compute-1 ceph-mon[80135]: pgmap v524: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:59:10 compute-1 python3.9[207977]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 20:59:10 compute-1 systemd[1]: Reloading.
Nov 23 20:59:10 compute-1 systemd-sysv-generator[208006]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:59:10 compute-1 systemd-rc-local-generator[208002]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:59:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:10 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:10 compute-1 sudo[207975]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:11.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:11 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 20:59:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:11.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 20:59:11 compute-1 sudo[208166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awfuqkgrrjayscthnsfqwrjtpnndwjmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931551.5773306-1146-267713962407993/AnsiballZ_stat.py'
Nov 23 20:59:11 compute-1 sudo[208166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:12 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:12 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:12 compute-1 python3.9[208168]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:59:12 compute-1 sudo[208166]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:12 compute-1 ceph-mon[80135]: pgmap v525: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 20:59:12 compute-1 sudo[208244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icgkydofyohezxzsjuzlfymaxzocwnss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931551.5773306-1146-267713962407993/AnsiballZ_file.py'
Nov 23 20:59:12 compute-1 sudo[208244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:12 compute-1 python3.9[208246]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:59:12 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:12 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04004780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:12 compute-1 sudo[208244]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:13.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:13 : epoch 69237539 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 20:59:13 compute-1 sudo[208396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwlvcmunudciveiqxparsbxwhxsumjgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931553.016747-1182-85190995913152/AnsiballZ_stat.py'
Nov 23 20:59:13 compute-1 sudo[208396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:13 compute-1 python3.9[208398]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:59:13 compute-1 sudo[208396]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:13 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:59:13 compute-1 sudo[208475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrjrsuijqzaojtbbyetrzemwfacndisl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931553.016747-1182-85190995913152/AnsiballZ_file.py'
Nov 23 20:59:13 compute-1 sudo[208475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:13 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:13 compute-1 python3.9[208477]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:59:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:13.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:13 compute-1 sudo[208475]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:14 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:14 compute-1 ceph-mon[80135]: pgmap v526: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 20:59:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:14 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:14 compute-1 sudo[208627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufpmgxpursdunceclmcezjsvqjxdxfkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931554.394776-1218-176186352939500/AnsiballZ_systemd.py'
Nov 23 20:59:14 compute-1 sudo[208627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:14 compute-1 python3.9[208629]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 20:59:15 compute-1 systemd[1]: Reloading.
Nov 23 20:59:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:15.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:15 compute-1 systemd-sysv-generator[208660]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:59:15 compute-1 systemd-rc-local-generator[208656]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:59:15 compute-1 systemd[1]: Starting Create netns directory...
Nov 23 20:59:15 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 23 20:59:15 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 23 20:59:15 compute-1 systemd[1]: Finished Create netns directory.
Nov 23 20:59:15 compute-1 sudo[208627]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:15 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04004780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:15.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:16 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:16 compute-1 sudo[208821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otuhupcrlbfjaywqhtsxdiivooildcut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931556.059658-1248-24022034934927/AnsiballZ_file.py'
Nov 23 20:59:16 compute-1 sudo[208821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:16 compute-1 ceph-mon[80135]: pgmap v527: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 20:59:16 compute-1 python3.9[208823]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:59:16 compute-1 sudo[208821]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:16 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af80008d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:17.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:17 compute-1 sudo[208973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysqdyiqgqwinazirysghjbrohidsqefw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931556.895982-1273-113789735648345/AnsiballZ_stat.py'
Nov 23 20:59:17 compute-1 sudo[208973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:17 compute-1 python3.9[208975]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:59:17 compute-1 sudo[208973]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:17 compute-1 sudo[209097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkewzqetkemqkewsipaoibxjmprapmek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931556.895982-1273-113789735648345/AnsiballZ_copy.py'
Nov 23 20:59:17 compute-1 sudo[209097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:17 compute-1 python3.9[209099]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931556.895982-1273-113789735648345/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:59:17 compute-1 sudo[209097]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:17 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:17.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:18 compute-1 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 23 20:59:18 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:18 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04004780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:18 compute-1 ceph-mon[80135]: pgmap v528: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 20:59:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:59:18 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:59:18 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:18 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ad8001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 20:59:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:19.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 20:59:19 compute-1 sudo[209251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kelpkmkfxdjczuxnpuziuvwndwcibljr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931558.8016303-1324-237607582099659/AnsiballZ_file.py'
Nov 23 20:59:19 compute-1 sudo[209251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:19 compute-1 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 23 20:59:19 compute-1 python3.9[209253]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 20:59:19 compute-1 sudo[209251]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:19 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205919 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 20:59:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:19.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:20 compute-1 sudo[209405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywbwdfsmybdkhwdekiukrmdpasipdhew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931559.7934384-1347-276148035246262/AnsiballZ_stat.py'
Nov 23 20:59:20 compute-1 sudo[209405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:20 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:20 compute-1 python3.9[209407]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:59:20 compute-1 sudo[209405]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:20 compute-1 ceph-mon[80135]: pgmap v529: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 20:59:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:20 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04004780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:20 compute-1 sudo[209528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxgwyhzcfmryieryqvammwokuxobwrwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931559.7934384-1347-276148035246262/AnsiballZ_copy.py'
Nov 23 20:59:20 compute-1 sudo[209528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:20 compute-1 python3.9[209530]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763931559.7934384-1347-276148035246262/.source.json _original_basename=._w_91cm2 follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:59:20 compute-1 sudo[209528]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:21.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:21 compute-1 sudo[209555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:59:21 compute-1 sudo[209555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:59:21 compute-1 sudo[209555]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:21 compute-1 sudo[209706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpqluczsjwdhebrjqeipdzeaweceiexc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931561.3080912-1393-138847653319830/AnsiballZ_file.py'
Nov 23 20:59:21 compute-1 sudo[209706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:21 compute-1 python3.9[209708]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:59:21 compute-1 sudo[209706]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:21 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ad8001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:21.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:22 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:22 compute-1 sudo[209858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmhxjxwqwhagrremsvxjklumrqkvzros ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931562.1927361-1416-252973235961765/AnsiballZ_stat.py'
Nov 23 20:59:22 compute-1 sudo[209858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:22 compute-1 ceph-mon[80135]: pgmap v530: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 20:59:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:22 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004020 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:22 compute-1 sudo[209858]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:23.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:23 compute-1 sudo[209981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgxhjplvcefxqtxebcxcncotvhqjnusq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931562.1927361-1416-252973235961765/AnsiballZ_copy.py'
Nov 23 20:59:23 compute-1 sudo[209981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:23 compute-1 sudo[209981]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:23 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:59:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:23 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04004780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:23.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:24 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:24 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04004780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:24 compute-1 ceph-mon[80135]: pgmap v531: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Nov 23 20:59:24 compute-1 sudo[210134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hplaljunacqlezvytnszvnnfpistphpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931564.1409636-1467-124331764081302/AnsiballZ_container_config_data.py'
Nov 23 20:59:24 compute-1 sudo[210134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:24 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:24 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:24 compute-1 python3.9[210136]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 23 20:59:24 compute-1 sudo[210134]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 20:59:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:25.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 20:59:25 compute-1 sudo[210296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbwklcxqcvjqoinsvzthnacbvxhomdkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931565.166324-1494-120060160270379/AnsiballZ_container_config_hash.py'
Nov 23 20:59:25 compute-1 sudo[210296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:25 compute-1 podman[210261]: 2025-11-23 20:59:25.601175048 +0000 UTC m=+0.060544291 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 20:59:25 compute-1 python3.9[210303]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 23 20:59:25 compute-1 sudo[210296]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:25 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:25.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:26 compute-1 kernel: ganesha.nfsd[209100]: segfault at 50 ip 00007f6bb4e3432e sp 00007f6b76ffc210 error 4 in libntirpc.so.5.8[7f6bb4e19000+2c000] likely on CPU 3 (core 0, socket 3)
Nov 23 20:59:26 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 23 20:59:26 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:26 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04004780 fd 38 proxy ignored for local
Nov 23 20:59:26 compute-1 systemd[1]: Started Process Core Dump (PID 210333/UID 0).
Nov 23 20:59:26 compute-1 ceph-mon[80135]: pgmap v532: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Nov 23 20:59:26 compute-1 sudo[210460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ievfzpkyemapnjrajvqrcbzwbqdjbnhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931566.3178997-1521-92788045427774/AnsiballZ_podman_container_info.py'
Nov 23 20:59:26 compute-1 sudo[210460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:27.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:27 compute-1 python3.9[210462]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 23 20:59:27 compute-1 systemd-coredump[210334]: Process 195040 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 59:
                                                    #0  0x00007f6bb4e3432e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 23 20:59:27 compute-1 sudo[210460]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:27 compute-1 systemd[1]: systemd-coredump@6-210333-0.service: Deactivated successfully.
Nov 23 20:59:27 compute-1 systemd[1]: systemd-coredump@6-210333-0.service: Consumed 1.035s CPU time.
Nov 23 20:59:27 compute-1 podman[210515]: 2025-11-23 20:59:27.339621888 +0000 UTC m=+0.026237506 container died 53986badd315b38d8b9fa281241deaae5f5b036f9383287bb4abe40b27adebd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 20:59:27 compute-1 systemd[1]: var-lib-containers-storage-overlay-4fff88ddf62e59bbaca93d42aba99bc0cdc0c8fa1af4ad77cb6d0566221c0570-merged.mount: Deactivated successfully.
Nov 23 20:59:27 compute-1 podman[210515]: 2025-11-23 20:59:27.432467339 +0000 UTC m=+0.119082947 container remove 53986badd315b38d8b9fa281241deaae5f5b036f9383287bb4abe40b27adebd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 20:59:27 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Main process exited, code=exited, status=139/n/a
Nov 23 20:59:27 compute-1 ceph-mon[80135]: pgmap v533: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 20:59:27 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Failed with result 'exit-code'.
Nov 23 20:59:27 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.403s CPU time.
Nov 23 20:59:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:27.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:28 compute-1 sudo[210564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 20:59:28 compute-1 sudo[210564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:59:28 compute-1 sudo[210564]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:28 compute-1 sudo[210616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 20:59:28 compute-1 sudo[210616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:59:28 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:59:28 compute-1 sudo[210756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krunwhkplwfemqfsotsxyhxkmtorbtyc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763931568.3732343-1560-216240945702200/AnsiballZ_edpm_container_manage.py'
Nov 23 20:59:28 compute-1 sudo[210756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:28 compute-1 sudo[210616]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:59:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:29.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:59:29 compute-1 python3[210758]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 23 20:59:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:29.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:30 compute-1 ceph-mon[80135]: pgmap v534: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 20:59:30 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:59:30 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 20:59:30 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:59:30 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:59:30 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 20:59:30 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 20:59:30 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 20:59:30 compute-1 podman[210785]: 2025-11-23 20:59:30.170437431 +0000 UTC m=+1.040954953 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 23 20:59:30 compute-1 podman[210844]: 2025-11-23 20:59:30.309710178 +0000 UTC m=+0.046798316 container create 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 23 20:59:30 compute-1 podman[210844]: 2025-11-23 20:59:30.288264194 +0000 UTC m=+0.025352342 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 23 20:59:30 compute-1 python3[210758]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro 
--volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 23 20:59:30 compute-1 sudo[210756]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 20:59:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:31.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 20:59:31 compute-1 sudo[211033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtcuvmtmjmiwotjozhcdylfeqtnvwmpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931570.9739482-1584-172377470024694/AnsiballZ_stat.py'
Nov 23 20:59:31 compute-1 sudo[211033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:31 compute-1 python3.9[211035]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:59:31 compute-1 sudo[211033]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:31 compute-1 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 23 20:59:31 compute-1 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 23 20:59:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 20:59:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:31.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 20:59:32 compute-1 ceph-mon[80135]: pgmap v535: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 23 20:59:32 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205932 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 20:59:32 compute-1 sudo[211190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuvvupttvikwuywxsarafrqmtmffwltg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931572.0270958-1611-72454799402674/AnsiballZ_file.py'
Nov 23 20:59:32 compute-1 sudo[211190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:32 compute-1 python3.9[211192]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:59:32 compute-1 sudo[211190]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:32 compute-1 sudo[211266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhigiifgesvqkeucidimogvnnsxwdicl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931572.0270958-1611-72454799402674/AnsiballZ_stat.py'
Nov 23 20:59:32 compute-1 sudo[211266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:32 compute-1 python3.9[211268]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:59:32 compute-1 sudo[211266]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:33.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:33 compute-1 sudo[211418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tughyvwbwspxkmsotavxskbounmrlvum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931572.9666107-1611-145048122540003/AnsiballZ_copy.py'
Nov 23 20:59:33 compute-1 sudo[211418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:33 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:59:33 compute-1 python3.9[211420]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763931572.9666107-1611-145048122540003/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:59:33 compute-1 sudo[211418]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:33.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:33 compute-1 sudo[211468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 20:59:33 compute-1 sudo[211468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:59:33 compute-1 sudo[211468]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:34 compute-1 sudo[211518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhcbcjtlbxzoggzonpjaikgdnfapstto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931572.9666107-1611-145048122540003/AnsiballZ_systemd.py'
Nov 23 20:59:34 compute-1 sudo[211518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:34 compute-1 ceph-mon[80135]: pgmap v536: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 20:59:34 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:59:34 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:59:34 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 20:59:34 compute-1 python3.9[211521]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 20:59:34 compute-1 systemd[1]: Reloading.
Nov 23 20:59:34 compute-1 systemd-rc-local-generator[211570]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:59:34 compute-1 systemd-sysv-generator[211573]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:59:34 compute-1 podman[211523]: 2025-11-23 20:59:34.408324179 +0000 UTC m=+0.081800591 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 23 20:59:34 compute-1 sudo[211518]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:34 compute-1 sudo[211657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylsdpdizyqdpgdnlbkdgtdpfpfkxggrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931572.9666107-1611-145048122540003/AnsiballZ_systemd.py'
Nov 23 20:59:34 compute-1 sudo[211657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:35.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:35 compute-1 python3.9[211659]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 20:59:35 compute-1 systemd[1]: Reloading.
Nov 23 20:59:35 compute-1 systemd-rc-local-generator[211689]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:59:35 compute-1 systemd-sysv-generator[211693]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:59:35 compute-1 systemd[1]: Starting multipathd container...
Nov 23 20:59:35 compute-1 systemd[1]: Started libcrun container.
Nov 23 20:59:35 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7726ae459c391d9257c0002f58c608b0e80bc067b5fffa8d7a4f9296ae99102b/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 23 20:59:35 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7726ae459c391d9257c0002f58c608b0e80bc067b5fffa8d7a4f9296ae99102b/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 23 20:59:35 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755.
Nov 23 20:59:35 compute-1 podman[211699]: 2025-11-23 20:59:35.674807949 +0000 UTC m=+0.113999066 container init 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 23 20:59:35 compute-1 multipathd[211714]: + sudo -E kolla_set_configs
Nov 23 20:59:35 compute-1 podman[211699]: 2025-11-23 20:59:35.698172709 +0000 UTC m=+0.137363806 container start 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 23 20:59:35 compute-1 podman[211699]: multipathd
Nov 23 20:59:35 compute-1 systemd[1]: Started multipathd container.
Nov 23 20:59:35 compute-1 sudo[211720]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 23 20:59:35 compute-1 sudo[211720]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 23 20:59:35 compute-1 sudo[211720]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 23 20:59:35 compute-1 sudo[211657]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:35 compute-1 podman[211721]: 2025-11-23 20:59:35.764630953 +0000 UTC m=+0.056694946 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 23 20:59:35 compute-1 systemd[1]: 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755-2c67b0cf0757b69f.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 20:59:35 compute-1 systemd[1]: 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755-2c67b0cf0757b69f.service: Failed with result 'exit-code'.
Nov 23 20:59:35 compute-1 multipathd[211714]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 23 20:59:35 compute-1 multipathd[211714]: INFO:__main__:Validating config file
Nov 23 20:59:35 compute-1 multipathd[211714]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 23 20:59:35 compute-1 multipathd[211714]: INFO:__main__:Writing out command to execute
Nov 23 20:59:35 compute-1 sudo[211720]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:35 compute-1 multipathd[211714]: ++ cat /run_command
Nov 23 20:59:35 compute-1 multipathd[211714]: + CMD='/usr/sbin/multipathd -d'
Nov 23 20:59:35 compute-1 multipathd[211714]: + ARGS=
Nov 23 20:59:35 compute-1 multipathd[211714]: + sudo kolla_copy_cacerts
Nov 23 20:59:35 compute-1 sudo[211757]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 23 20:59:35 compute-1 sudo[211757]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 23 20:59:35 compute-1 sudo[211757]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 23 20:59:35 compute-1 sudo[211757]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:35 compute-1 multipathd[211714]: + [[ ! -n '' ]]
Nov 23 20:59:35 compute-1 multipathd[211714]: + . kolla_extend_start
Nov 23 20:59:35 compute-1 multipathd[211714]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 23 20:59:35 compute-1 multipathd[211714]: Running command: '/usr/sbin/multipathd -d'
Nov 23 20:59:35 compute-1 multipathd[211714]: + umask 0022
Nov 23 20:59:35 compute-1 multipathd[211714]: + exec /usr/sbin/multipathd -d
Nov 23 20:59:35 compute-1 multipathd[211714]: 3524.444443 | --------start up--------
Nov 23 20:59:35 compute-1 multipathd[211714]: 3524.444461 | read /etc/multipath.conf
Nov 23 20:59:35 compute-1 multipathd[211714]: 3524.449511 | path checkers start up
Nov 23 20:59:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:35.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:36 compute-1 ceph-mon[80135]: pgmap v537: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 20:59:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:59:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:37.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:59:37 compute-1 python3.9[211903]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 20:59:37 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Scheduled restart job, restart counter is at 7.
Nov 23 20:59:37 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 20:59:37 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.403s CPU time.
Nov 23 20:59:37 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 20:59:37 compute-1 sudo[212068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zndkvmddviajfmjohjvxfsqclakgzzmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931577.5779283-1719-184105407546894/AnsiballZ_command.py'
Nov 23 20:59:37 compute-1 sudo[212068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:37.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:38 compute-1 podman[212105]: 2025-11-23 20:59:38.026079142 +0000 UTC m=+0.042613182 container create 36fdc947acb0f74c6cb2dbe393a95acb88d4327855046b049a9b027d3568eb16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 20:59:38 compute-1 python3.9[212077]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 20:59:38 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ac0bfe19a9b8adea475ad895db306f3a67519182ff8422f85cbb367036cc4d/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 23 20:59:38 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ac0bfe19a9b8adea475ad895db306f3a67519182ff8422f85cbb367036cc4d/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 20:59:38 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ac0bfe19a9b8adea475ad895db306f3a67519182ff8422f85cbb367036cc4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 20:59:38 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ac0bfe19a9b8adea475ad895db306f3a67519182ff8422f85cbb367036cc4d/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.fuxuha-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 20:59:38 compute-1 podman[212105]: 2025-11-23 20:59:38.082911831 +0000 UTC m=+0.099445901 container init 36fdc947acb0f74c6cb2dbe393a95acb88d4327855046b049a9b027d3568eb16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 20:59:38 compute-1 podman[212105]: 2025-11-23 20:59:38.08964986 +0000 UTC m=+0.106183880 container start 36fdc947acb0f74c6cb2dbe393a95acb88d4327855046b049a9b027d3568eb16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 23 20:59:38 compute-1 bash[212105]: 36fdc947acb0f74c6cb2dbe393a95acb88d4327855046b049a9b027d3568eb16
Nov 23 20:59:38 compute-1 podman[212105]: 2025-11-23 20:59:38.004224592 +0000 UTC m=+0.020758652 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 20:59:38 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:38 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 23 20:59:38 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:38 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 23 20:59:38 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 20:59:38 compute-1 ceph-mon[80135]: pgmap v538: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 20:59:38 compute-1 sudo[212068]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:38 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:38 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 23 20:59:38 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:38 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 23 20:59:38 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:38 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 23 20:59:38 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:38 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 23 20:59:38 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:38 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 23 20:59:38 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:38 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 20:59:38 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:59:38 compute-1 sudo[212324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wybhwnturbczjddryydtmcjevbqdhekv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931578.5662189-1743-239494961699705/AnsiballZ_systemd.py'
Nov 23 20:59:38 compute-1 sudo[212324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:39.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:39 compute-1 python3.9[212326]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 20:59:39 compute-1 systemd[1]: Stopping multipathd container...
Nov 23 20:59:39 compute-1 multipathd[211714]: 3527.829224 | exit (signal)
Nov 23 20:59:39 compute-1 multipathd[211714]: 3527.829285 | --------shut down-------
Nov 23 20:59:39 compute-1 systemd[1]: libpod-8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755.scope: Deactivated successfully.
Nov 23 20:59:39 compute-1 podman[212330]: 2025-11-23 20:59:39.232664515 +0000 UTC m=+0.064137604 container died 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0)
Nov 23 20:59:39 compute-1 systemd[1]: 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755-2c67b0cf0757b69f.timer: Deactivated successfully.
Nov 23 20:59:39 compute-1 systemd[1]: Stopped /usr/bin/podman healthcheck run 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755.
Nov 23 20:59:39 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755-userdata-shm.mount: Deactivated successfully.
Nov 23 20:59:39 compute-1 systemd[1]: var-lib-containers-storage-overlay-7726ae459c391d9257c0002f58c608b0e80bc067b5fffa8d7a4f9296ae99102b-merged.mount: Deactivated successfully.
Nov 23 20:59:39 compute-1 podman[212330]: 2025-11-23 20:59:39.376585175 +0000 UTC m=+0.208058244 container cleanup 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 23 20:59:39 compute-1 podman[212330]: multipathd
Nov 23 20:59:39 compute-1 podman[212357]: multipathd
Nov 23 20:59:39 compute-1 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 23 20:59:39 compute-1 systemd[1]: Stopped multipathd container.
Nov 23 20:59:39 compute-1 systemd[1]: Starting multipathd container...
Nov 23 20:59:39 compute-1 systemd[1]: Started libcrun container.
Nov 23 20:59:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7726ae459c391d9257c0002f58c608b0e80bc067b5fffa8d7a4f9296ae99102b/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 23 20:59:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7726ae459c391d9257c0002f58c608b0e80bc067b5fffa8d7a4f9296ae99102b/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 23 20:59:39 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755.
Nov 23 20:59:39 compute-1 podman[212370]: 2025-11-23 20:59:39.56594286 +0000 UTC m=+0.090874543 container init 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 23 20:59:39 compute-1 multipathd[212383]: + sudo -E kolla_set_configs
Nov 23 20:59:39 compute-1 podman[212370]: 2025-11-23 20:59:39.585953061 +0000 UTC m=+0.110884744 container start 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible)
Nov 23 20:59:39 compute-1 sudo[212392]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 23 20:59:39 compute-1 sudo[212392]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 23 20:59:39 compute-1 sudo[212392]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 23 20:59:39 compute-1 podman[212370]: multipathd
Nov 23 20:59:39 compute-1 systemd[1]: Started multipathd container.
Nov 23 20:59:39 compute-1 sudo[212324]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:39 compute-1 multipathd[212383]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 23 20:59:39 compute-1 multipathd[212383]: INFO:__main__:Validating config file
Nov 23 20:59:39 compute-1 multipathd[212383]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 23 20:59:39 compute-1 multipathd[212383]: INFO:__main__:Writing out command to execute
Nov 23 20:59:39 compute-1 sudo[212392]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:39 compute-1 multipathd[212383]: ++ cat /run_command
Nov 23 20:59:39 compute-1 multipathd[212383]: + CMD='/usr/sbin/multipathd -d'
Nov 23 20:59:39 compute-1 multipathd[212383]: + ARGS=
Nov 23 20:59:39 compute-1 multipathd[212383]: + sudo kolla_copy_cacerts
Nov 23 20:59:39 compute-1 sudo[212413]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 23 20:59:39 compute-1 sudo[212413]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 23 20:59:39 compute-1 sudo[212413]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 23 20:59:39 compute-1 sudo[212413]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:39 compute-1 multipathd[212383]: Running command: '/usr/sbin/multipathd -d'
Nov 23 20:59:39 compute-1 multipathd[212383]: + [[ ! -n '' ]]
Nov 23 20:59:39 compute-1 multipathd[212383]: + . kolla_extend_start
Nov 23 20:59:39 compute-1 multipathd[212383]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 23 20:59:39 compute-1 multipathd[212383]: + umask 0022
Nov 23 20:59:39 compute-1 multipathd[212383]: + exec /usr/sbin/multipathd -d
Nov 23 20:59:39 compute-1 podman[212393]: 2025-11-23 20:59:39.657618714 +0000 UTC m=+0.062052769 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 20:59:39 compute-1 systemd[1]: 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755-2fc5d13b8cdc79bd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 20:59:39 compute-1 systemd[1]: 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755-2fc5d13b8cdc79bd.service: Failed with result 'exit-code'.
Nov 23 20:59:39 compute-1 multipathd[212383]: 3528.299903 | --------start up--------
Nov 23 20:59:39 compute-1 multipathd[212383]: 3528.299925 | read /etc/multipath.conf
Nov 23 20:59:39 compute-1 multipathd[212383]: 3528.304603 | path checkers start up
Nov 23 20:59:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:39.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:40 compute-1 ceph-mon[80135]: pgmap v539: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 20:59:40 compute-1 sudo[212573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyvwqophjqizfrcbjinjgllmcwatalon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931580.177218-1767-98541660000639/AnsiballZ_file.py'
Nov 23 20:59:40 compute-1 sudo[212573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:40 compute-1 python3.9[212575]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:59:40 compute-1 sudo[212573]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:41.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:41 compute-1 sudo[212600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 20:59:41 compute-1 sudo[212600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 20:59:41 compute-1 sudo[212600]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:41 compute-1 sudo[212751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjspyncyjddikxkicknkbazrqormltvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931581.4190502-1803-148760810819183/AnsiballZ_file.py'
Nov 23 20:59:41 compute-1 sudo[212751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:41 compute-1 python3.9[212753]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 23 20:59:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:41.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:41 compute-1 sudo[212751]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:42 compute-1 ceph-mon[80135]: pgmap v540: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:59:42 compute-1 sudo[212903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuewyeyjxlqxtdrjpfqibyfmxgpfnftv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931582.2696905-1827-128219151349206/AnsiballZ_modprobe.py'
Nov 23 20:59:42 compute-1 sudo[212903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:42 compute-1 python3.9[212905]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 23 20:59:42 compute-1 kernel: Key type psk registered
Nov 23 20:59:42 compute-1 sudo[212903]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:43.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:43 compute-1 sudo[213067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbrqundsnbhfnxxvhyldzkxdarimirsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931583.2132385-1851-242402505142477/AnsiballZ_stat.py'
Nov 23 20:59:43 compute-1 sudo[213067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:43 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:59:43 compute-1 python3.9[213069]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 20:59:43 compute-1 sudo[213067]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:43.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:44 compute-1 sudo[213190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emgrdlrovujxcenomfjijtquhzymlqar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931583.2132385-1851-242402505142477/AnsiballZ_copy.py'
Nov 23 20:59:44 compute-1 sudo[213190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:44 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:44 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 20:59:44 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:44 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 20:59:44 compute-1 ceph-mon[80135]: pgmap v541: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 20:59:44 compute-1 python3.9[213192]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763931583.2132385-1851-242402505142477/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:59:44 compute-1 sudo[213190]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:59:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:45.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:59:45 compute-1 sudo[213342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msafoqpwycrqbqtixwdwowhaamrwwzgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931584.8732285-1899-132676089825821/AnsiballZ_lineinfile.py'
Nov 23 20:59:45 compute-1 sudo[213342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:45 compute-1 python3.9[213344]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:59:45 compute-1 sudo[213342]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:45.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:46 compute-1 sudo[213495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijissrtnqeddhztdapsicnzoycewkvgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931585.7621589-1923-134923570650021/AnsiballZ_systemd.py'
Nov 23 20:59:46 compute-1 sudo[213495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:46 compute-1 ceph-mon[80135]: pgmap v542: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 20:59:46 compute-1 python3.9[213497]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 20:59:46 compute-1 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 23 20:59:46 compute-1 systemd[1]: Stopped Load Kernel Modules.
Nov 23 20:59:46 compute-1 systemd[1]: Stopping Load Kernel Modules...
Nov 23 20:59:46 compute-1 systemd[1]: Starting Load Kernel Modules...
Nov 23 20:59:46 compute-1 systemd[1]: Finished Load Kernel Modules.
Nov 23 20:59:46 compute-1 sudo[213495]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:47.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:47 compute-1 sudo[213651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhunlinddrukmetcsjfrwqjqfocrlhui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931586.9655998-1947-161953306089450/AnsiballZ_dnf.py'
Nov 23 20:59:47 compute-1 sudo[213651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:47 compute-1 python3.9[213653]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 20:59:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:59:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:47.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:59:48 compute-1 ceph-mon[80135]: pgmap v543: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 20:59:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 20:59:48 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:59:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:49.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:49 compute-1 systemd[1]: Reloading.
Nov 23 20:59:49 compute-1 systemd-rc-local-generator[213688]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:59:49 compute-1 systemd-sysv-generator[213691]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:59:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:59:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:49.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:59:50 compute-1 systemd[1]: Reloading.
Nov 23 20:59:50 compute-1 systemd-rc-local-generator[213722]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:59:50 compute-1 systemd-sysv-generator[213726]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:59:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 20:59:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 23 20:59:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 23 20:59:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 23 20:59:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 23 20:59:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 23 20:59:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 23 20:59:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 20:59:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 20:59:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 20:59:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 23 20:59:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 20:59:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 23 20:59:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 23 20:59:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 23 20:59:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 23 20:59:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 23 20:59:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 23 20:59:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 23 20:59:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 23 20:59:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 23 20:59:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 23 20:59:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 23 20:59:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 23 20:59:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 20:59:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 23 20:59:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 20:59:50 compute-1 ceph-mon[80135]: pgmap v544: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 20:59:50 compute-1 systemd-logind[793]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 23 20:59:50 compute-1 systemd-logind[793]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 23 20:59:50 compute-1 lvm[213779]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 20:59:50 compute-1 lvm[213779]: VG ceph_vg0 finished
Nov 23 20:59:50 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 20:59:50 compute-1 systemd[1]: Starting man-db-cache-update.service...
Nov 23 20:59:50 compute-1 systemd[1]: Reloading.
Nov 23 20:59:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:50 compute-1 systemd-sysv-generator[213836]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:59:50 compute-1 systemd-rc-local-generator[213833]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:59:51 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 23 20:59:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:51.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:59:51.056 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 20:59:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:59:51.056 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 20:59:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 20:59:51.056 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 20:59:51 compute-1 sudo[213651]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:51 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205951 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 20:59:51 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:51 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:59:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:51.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:59:52 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 23 20:59:52 compute-1 systemd[1]: Finished man-db-cache-update.service.
Nov 23 20:59:52 compute-1 systemd[1]: man-db-cache-update.service: Consumed 1.541s CPU time.
Nov 23 20:59:52 compute-1 systemd[1]: run-r66adbe3ddcc74f7a92d86be4ad7bc57c.service: Deactivated successfully.
Nov 23 20:59:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:52 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:52 compute-1 sudo[215121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfnhefkpcexqedmmhdcrdewsnffhjzxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931591.975427-1971-167930092492119/AnsiballZ_systemd_service.py'
Nov 23 20:59:52 compute-1 sudo[215121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:52 compute-1 ceph-mon[80135]: pgmap v545: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 20:59:52 compute-1 python3.9[215123]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 20:59:52 compute-1 systemd[1]: Stopping Open-iSCSI...
Nov 23 20:59:52 compute-1 iscsid[203042]: iscsid shutting down.
Nov 23 20:59:52 compute-1 systemd[1]: iscsid.service: Deactivated successfully.
Nov 23 20:59:52 compute-1 systemd[1]: Stopped Open-iSCSI.
Nov 23 20:59:52 compute-1 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 23 20:59:52 compute-1 systemd[1]: Starting Open-iSCSI...
Nov 23 20:59:52 compute-1 systemd[1]: Started Open-iSCSI.
Nov 23 20:59:52 compute-1 sudo[215121]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:52 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:59:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:53.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:59:53 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:59:53 compute-1 python3.9[215277]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 20:59:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:53 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:59:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:53.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:59:54 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205954 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 20:59:54 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:54 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:54 compute-1 ceph-mon[80135]: pgmap v546: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 20:59:54 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:54 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75900016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:54 compute-1 sudo[215432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evofvinvvzxzawuhvlccphxbuonvfjoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931594.4248261-2023-71747231166958/AnsiballZ_file.py'
Nov 23 20:59:54 compute-1 sudo[215432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:54 compute-1 python3.9[215434]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 20:59:54 compute-1 sudo[215432]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:55.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:55 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b40021f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:55 compute-1 sudo[215596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kijyewnzjdmazvkknbxlqjeaqdqmdnqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931595.5836995-2057-95377168339058/AnsiballZ_systemd_service.py'
Nov 23 20:59:55 compute-1 sudo[215596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 20:59:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 20:59:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:55.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 20:59:56 compute-1 podman[215559]: 2025-11-23 20:59:56.018798584 +0000 UTC m=+0.102238444 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 23 20:59:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:56 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:56 compute-1 python3.9[215603]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 20:59:56 compute-1 systemd[1]: Reloading.
Nov 23 20:59:56 compute-1 ceph-mon[80135]: pgmap v547: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 20:59:56 compute-1 systemd-rc-local-generator[215634]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 20:59:56 compute-1 systemd-sysv-generator[215639]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 20:59:56 compute-1 sudo[215596]: pam_unix(sudo:session): session closed for user root
Nov 23 20:59:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:56 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b40021f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:57.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:57 compute-1 sshd-session[215713]: Invalid user solv from 161.35.133.66 port 35508
Nov 23 20:59:57 compute-1 sshd-session[215713]: Connection closed by invalid user solv 161.35.133.66 port 35508 [preauth]
Nov 23 20:59:57 compute-1 python3.9[215794]: ansible-ansible.builtin.service_facts Invoked
Nov 23 20:59:57 compute-1 network[215812]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 20:59:57 compute-1 network[215813]: 'network-scripts' will be removed from distribution in near future.
Nov 23 20:59:57 compute-1 network[215814]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 20:59:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:57 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:57.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:58 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75900016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:58 compute-1 ceph-mon[80135]: pgmap v548: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 20:59:58 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 20:59:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:58 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b40021f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:59.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 20:59:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:59 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b40021f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 20:59:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 20:59:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 20:59:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:59.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:00 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:00 compute-1 sshd-session[215887]: Invalid user mpi from 213.209.143.48 port 53866
Nov 23 21:00:00 compute-1 ceph-mon[80135]: pgmap v549: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 21:00:00 compute-1 ceph-mon[80135]: overall HEALTH_OK
Nov 23 21:00:00 compute-1 sshd-session[215887]: Connection closed by invalid user mpi 213.209.143.48 port 53866 [preauth]
Nov 23 21:00:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:00 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75900016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:00 : epoch 692375ba : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 21:00:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:01.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:01 compute-1 sudo[215918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:00:01 compute-1 sudo[215918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:00:01 compute-1 sudo[215918]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:01 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b40021f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:00:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:01.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:00:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:02 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b40021f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:02 compute-1 ceph-mon[80135]: pgmap v550: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 511 B/s wr, 2 op/s
Nov 23 21:00:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:02 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:02 compute-1 sshd-session[215991]: Invalid user validate from 92.118.39.92 port 60134
Nov 23 21:00:03 compute-1 sshd-session[215991]: Connection closed by invalid user validate 92.118.39.92 port 60134 [preauth]
Nov 23 21:00:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:00:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:03.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:00:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:00:03 compute-1 sudo[216119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yajdivmrdizamrjaomabxkpyoovdtbmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931603.4003139-2113-214132351392418/AnsiballZ_systemd_service.py'
Nov 23 21:00:03 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:00:03 compute-1 sudo[216119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:03 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:03 : epoch 692375ba : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 21:00:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:03 : epoch 692375ba : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 21:00:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:03.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:03 compute-1 python3.9[216121]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 21:00:04 compute-1 sudo[216119]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:04 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b40021f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:04 compute-1 sudo[216272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueonxmqhcncjnbxcsmjfcamyfywtjrjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931604.2033567-2113-236292644030986/AnsiballZ_systemd_service.py'
Nov 23 21:00:04 compute-1 sudo[216272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:04 compute-1 ceph-mon[80135]: pgmap v551: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 21:00:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:04 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:04 compute-1 python3.9[216274]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 21:00:04 compute-1 sudo[216272]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:04 compute-1 podman[216276]: 2025-11-23 21:00:04.85355712 +0000 UTC m=+0.079891511 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 23 21:00:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:05.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:05 compute-1 sudo[216451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chonvpvfbvdrkdmbdbequueugopzfwja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931604.9226334-2113-12319228037472/AnsiballZ_systemd_service.py'
Nov 23 21:00:05 compute-1 sudo[216451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:05 compute-1 python3.9[216453]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 21:00:05 compute-1 sudo[216451]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:05 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:05.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:06 compute-1 sudo[216605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orgbmmmfdrcdkrywxeppjjxagbsyedts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931605.7057014-2113-60161322349449/AnsiballZ_systemd_service.py'
Nov 23 21:00:06 compute-1 sudo[216605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:06 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:06 compute-1 python3.9[216607]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 21:00:06 compute-1 sudo[216605]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:06 compute-1 ceph-mon[80135]: pgmap v552: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 21:00:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:06 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b40021f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:06 compute-1 sudo[216758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmubgehefgeczsjclaklapbabsefsesv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931606.4898503-2113-75976151109010/AnsiballZ_systemd_service.py'
Nov 23 21:00:06 compute-1 sudo[216758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:06 : epoch 692375ba : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 21:00:06 compute-1 python3.9[216760]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 21:00:07 compute-1 sudo[216758]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:07.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:07 compute-1 sudo[216912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onhqbqqupkqqzjjmokuaubqfavcopblr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931607.230682-2113-93014763799849/AnsiballZ_systemd_service.py'
Nov 23 21:00:07 compute-1 sudo[216912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:07 compute-1 python3.9[216914]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 21:00:07 compute-1 sudo[216912]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:07 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:07.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:08 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:08 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:08 compute-1 sudo[217065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghmttirtknufcnyhsmbhesasouqhczfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931607.9592545-2113-278335039088511/AnsiballZ_systemd_service.py'
Nov 23 21:00:08 compute-1 sudo[217065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:08 compute-1 python3.9[217067]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 21:00:08 compute-1 sudo[217065]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:08 compute-1 ceph-mon[80135]: pgmap v553: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 21:00:08 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:00:08 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:08 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:08 compute-1 sudo[217218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzcbrwzyfsycdpsoiqcfxtyvrhcvevre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931608.6261635-2113-54686078453721/AnsiballZ_systemd_service.py'
Nov 23 21:00:08 compute-1 sudo[217218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:09.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:09 compute-1 python3.9[217220]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 21:00:09 compute-1 sudo[217218]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:09 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:09.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:10 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:10 compute-1 ceph-mon[80135]: pgmap v554: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 21:00:10 compute-1 podman[217247]: 2025-11-23 21:00:10.63761176 +0000 UTC m=+0.049501715 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 21:00:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:10 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594003870 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:11.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:11 compute-1 sudo[217393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlcntddkcbmgvqnesrzubszbeertjkrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931611.32661-2290-42988771414880/AnsiballZ_file.py'
Nov 23 21:00:11 compute-1 ceph-mon[80135]: pgmap v555: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 21:00:11 compute-1 sudo[217393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:11 compute-1 python3.9[217395]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 21:00:11 compute-1 sudo[217393]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:11 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594003870 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:00:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:11.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:00:12 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:12 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594003870 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:12 compute-1 sudo[217545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhwxlktuwcnlkwqshyzvleagspuujuqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931611.9411132-2290-143987008938967/AnsiballZ_file.py'
Nov 23 21:00:12 compute-1 sudo[217545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:12 compute-1 python3.9[217547]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 21:00:12 compute-1 sudo[217545]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:12 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:12 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594003870 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:12 compute-1 sudo[217697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcczcvnccvybvznwxhbdcetbuordtqmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931612.5699813-2290-52826478416362/AnsiballZ_file.py'
Nov 23 21:00:12 compute-1 sudo[217697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:12 compute-1 python3.9[217699]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 21:00:12 compute-1 sudo[217697]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:00:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:13.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:00:13 compute-1 sudo[217849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbkzrrcgxxtdyjrdtzzkhhrjqsttrccl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931613.1176126-2290-228049980468889/AnsiballZ_file.py'
Nov 23 21:00:13 compute-1 sudo[217849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:13 compute-1 python3.9[217851]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 21:00:13 compute-1 sudo[217849]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:13 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:00:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210013 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 21:00:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:13 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:13 compute-1 sudo[218002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qopkvleumpiogxwpcvtpjozomgkjhjyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931613.6933455-2290-248097217035855/AnsiballZ_file.py'
Nov 23 21:00:13 compute-1 sudo[218002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:14.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:14 compute-1 ceph-mon[80135]: pgmap v556: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 21:00:14 compute-1 python3.9[218004]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 21:00:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:14 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:14 compute-1 sudo[218002]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:14 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:14 compute-1 sudo[218154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zixlkfvblijdymnpxlrfbiwzqwnojlnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931614.3504446-2290-178215066906371/AnsiballZ_file.py'
Nov 23 21:00:14 compute-1 sudo[218154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:15 compute-1 python3.9[218156]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 21:00:15 compute-1 sudo[218154]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:15.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:15 compute-1 sudo[218307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccnulfvgzlicupukrqhwvvdufyexeidb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931615.2357554-2290-46743335892523/AnsiballZ_file.py'
Nov 23 21:00:15 compute-1 sudo[218307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:15 compute-1 python3.9[218309]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 21:00:15 compute-1 sudo[218307]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:15 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:16.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:16 compute-1 ceph-mon[80135]: pgmap v557: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 21:00:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:16 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:16 compute-1 sudo[218459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvqbeegxyjgkyaiuavqefifscfcjalpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931615.962619-2290-71043730650693/AnsiballZ_file.py'
Nov 23 21:00:16 compute-1 sudo[218459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:16 compute-1 python3.9[218461]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 21:00:16 compute-1 sudo[218459]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:16 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:00:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:17.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:00:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:17 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:18.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:18 compute-1 sudo[218612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utrapjkpvdzifilmbeajtlgcpcgycnsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931617.7147832-2461-162954088077171/AnsiballZ_file.py'
Nov 23 21:00:18 compute-1 sudo[218612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:18 compute-1 ceph-mon[80135]: pgmap v558: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 23 21:00:18 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:18 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:18 compute-1 python3.9[218614]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 21:00:18 compute-1 sudo[218612]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:18 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:00:18 compute-1 sudo[218764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vahyciespxwjvtfqgjuusvgnawmatjxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931618.3861575-2461-69966060128050/AnsiballZ_file.py'
Nov 23 21:00:18 compute-1 sudo[218764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:18 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:18 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:18 compute-1 python3.9[218766]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 21:00:18 compute-1 sudo[218764]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:19.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:19 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:00:19 compute-1 sudo[218916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksnulvhlthuzildllpzihvvlatwbpytn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931619.0349035-2461-154568227756338/AnsiballZ_file.py'
Nov 23 21:00:19 compute-1 sudo[218916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:19 compute-1 python3.9[218918]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 21:00:19 compute-1 sudo[218916]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:19 compute-1 sudo[219069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwaoariwparwypmttcwltdfhjvamefit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931619.5758307-2461-138981220233842/AnsiballZ_file.py'
Nov 23 21:00:19 compute-1 sudo[219069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:19 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594003870 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:19 compute-1 python3.9[219071]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 21:00:19 compute-1 sudo[219069]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:20.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:20 compute-1 ceph-mon[80135]: pgmap v559: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 23 21:00:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:20 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594003870 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:20 compute-1 sudo[219221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuaroddspvsxlttpoqjxghuvzcwmbxsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931620.1294339-2461-196658412329785/AnsiballZ_file.py'
Nov 23 21:00:20 compute-1 sudo[219221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:20 compute-1 python3.9[219223]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 21:00:20 compute-1 sudo[219221]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:20 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:20 compute-1 sudo[219373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhpxxpnclwkrlljvsctylpyoioirhsav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931620.6772327-2461-171236367435681/AnsiballZ_file.py'
Nov 23 21:00:20 compute-1 sudo[219373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:00:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:21.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:00:21 compute-1 python3.9[219375]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 21:00:21 compute-1 sudo[219373]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:21 compute-1 sudo[219499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:00:21 compute-1 sudo[219499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:00:21 compute-1 sudo[219499]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:21 compute-1 sudo[219549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylwgmcjmkzdwztolicxgputcypufaqqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931621.2500088-2461-133490279322419/AnsiballZ_file.py'
Nov 23 21:00:21 compute-1 sudo[219549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:21 compute-1 python3.9[219552]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 21:00:21 compute-1 sudo[219549]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:21 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:22.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:22 compute-1 sudo[219705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihzouolyftsvvjpovcrwdpcphtuqwcjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931621.7701797-2461-117856533768690/AnsiballZ_file.py'
Nov 23 21:00:22 compute-1 sudo[219705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:22 compute-1 ceph-mon[80135]: pgmap v560: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 23 21:00:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:22 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594003870 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:22 compute-1 python3.9[219707]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 21:00:22 compute-1 sudo[219705]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:22 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7588000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:23.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:23 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:00:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:23 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:24.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:24 compute-1 ceph-mon[80135]: pgmap v561: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 21:00:24 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:24 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:24 compute-1 sudo[219858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnqniiuwgpnrklfiaulbxzcohtrekxpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931624.2839398-2635-131671700305970/AnsiballZ_command.py'
Nov 23 21:00:24 compute-1 sudo[219858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:24 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:24 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594003870 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:24 compute-1 python3.9[219860]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 21:00:24 compute-1 sudo[219858]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:25.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:25 compute-1 python3.9[220013]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 23 21:00:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:25 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75880016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:26.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:26 compute-1 ceph-mon[80135]: pgmap v562: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 23 21:00:26 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:26 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:26 compute-1 sudo[220174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irlcxhenhxnvjouxyjnhxoypeyozjfke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931626.1888044-2689-914646780758/AnsiballZ_systemd_service.py'
Nov 23 21:00:26 compute-1 sudo[220174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:26 compute-1 podman[220137]: 2025-11-23 21:00:26.489697789 +0000 UTC m=+0.069636029 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 23 21:00:26 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:26 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:26 compute-1 python3.9[220182]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 21:00:26 compute-1 systemd[1]: Reloading.
Nov 23 21:00:26 compute-1 systemd-rc-local-generator[220211]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 21:00:26 compute-1 systemd-sysv-generator[220215]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 21:00:27 compute-1 sudo[220174]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:27.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:27 compute-1 sudo[220372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlsiijedrlcjycwiruisoxfkyztqiubh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931627.5210843-2713-70408196293333/AnsiballZ_command.py'
Nov 23 21:00:27 compute-1 sudo[220372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:27 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594003870 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:27 compute-1 python3.9[220374]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 21:00:28 compute-1 sudo[220372]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:28.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:28 compute-1 ceph-mon[80135]: pgmap v563: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:00:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:28 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75880016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:28 compute-1 sudo[220525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svmslovmcopngzeswbpvoazwdlfbkokg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931628.1424394-2713-271193719702918/AnsiballZ_command.py'
Nov 23 21:00:28 compute-1 sudo[220525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:28 compute-1 python3.9[220527]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 21:00:28 compute-1 sudo[220525]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:28 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:00:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:28 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:29 compute-1 sudo[220678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkbarstlnvjkdnovlykxxdszifbfkvtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931628.7362232-2713-53041534704338/AnsiballZ_command.py'
Nov 23 21:00:29 compute-1 sudo[220678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:29.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:29 compute-1 python3.9[220680]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 21:00:29 compute-1 sudo[220678]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:29 compute-1 sudo[220832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkjihwsljbhbrpxzfypwripafvwvumhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931629.3858314-2713-84444668846604/AnsiballZ_command.py'
Nov 23 21:00:29 compute-1 sudo[220832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:29 compute-1 python3.9[220834]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 21:00:29 compute-1 sudo[220832]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:29 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:30.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:30 compute-1 ceph-mon[80135]: pgmap v564: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:00:30 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Nov 23 21:00:30 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:00:30.193485) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 21:00:30 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Nov 23 21:00:30 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931630193536, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 1000, "num_deletes": 251, "total_data_size": 2266422, "memory_usage": 2313456, "flush_reason": "Manual Compaction"}
Nov 23 21:00:30 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Nov 23 21:00:30 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:30 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594004190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:30 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931630206976, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 1496311, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19849, "largest_seqno": 20844, "table_properties": {"data_size": 1491820, "index_size": 2143, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9799, "raw_average_key_size": 19, "raw_value_size": 1482875, "raw_average_value_size": 2953, "num_data_blocks": 96, "num_entries": 502, "num_filter_entries": 502, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763931548, "oldest_key_time": 1763931548, "file_creation_time": 1763931630, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Nov 23 21:00:30 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 13521 microseconds, and 4160 cpu microseconds.
Nov 23 21:00:30 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 21:00:30 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:00:30.207012) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 1496311 bytes OK
Nov 23 21:00:30 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:00:30.207028) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Nov 23 21:00:30 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:00:30.208268) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Nov 23 21:00:30 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:00:30.208283) EVENT_LOG_v1 {"time_micros": 1763931630208278, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 21:00:30 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:00:30.208301) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 21:00:30 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 2261523, prev total WAL file size 2261523, number of live WAL files 2.
Nov 23 21:00:30 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:00:30 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:00:30.208955) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Nov 23 21:00:30 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 21:00:30 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(1461KB)], [36(13MB)]
Nov 23 21:00:30 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931630209010, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 15560370, "oldest_snapshot_seqno": -1}
Nov 23 21:00:30 compute-1 sudo[220985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcnvpjtdgjphmdgqbzerlvesyoqgqxoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931630.0037594-2713-114997509724100/AnsiballZ_command.py'
Nov 23 21:00:30 compute-1 sudo[220985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:30 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 4981 keys, 13378883 bytes, temperature: kUnknown
Nov 23 21:00:30 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931630311239, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 13378883, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13344529, "index_size": 20804, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12485, "raw_key_size": 127139, "raw_average_key_size": 25, "raw_value_size": 13252980, "raw_average_value_size": 2660, "num_data_blocks": 854, "num_entries": 4981, "num_filter_entries": 4981, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763931630, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Nov 23 21:00:30 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 21:00:30 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:00:30.311447) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 13378883 bytes
Nov 23 21:00:30 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:00:30.322130) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.1 rd, 130.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 13.4 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(19.3) write-amplify(8.9) OK, records in: 5497, records dropped: 516 output_compression: NoCompression
Nov 23 21:00:30 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:00:30.322172) EVENT_LOG_v1 {"time_micros": 1763931630322156, "job": 20, "event": "compaction_finished", "compaction_time_micros": 102291, "compaction_time_cpu_micros": 26155, "output_level": 6, "num_output_files": 1, "total_output_size": 13378883, "num_input_records": 5497, "num_output_records": 4981, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 21:00:30 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:00:30 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931630322637, "job": 20, "event": "table_file_deletion", "file_number": 38}
Nov 23 21:00:30 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:00:30 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931630325281, "job": 20, "event": "table_file_deletion", "file_number": 36}
Nov 23 21:00:30 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:00:30.208892) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:00:30 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:00:30.325332) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:00:30 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:00:30.325337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:00:30 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:00:30.325338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:00:30 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:00:30.325340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:00:30 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:00:30.325341) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:00:30 compute-1 python3.9[220987]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 21:00:30 compute-1 sudo[220985]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:30 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:30 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75880016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:30 compute-1 sudo[221138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzehknnbzgdyyoqhlwweljupwszeqrrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931630.627245-2713-258757878178387/AnsiballZ_command.py'
Nov 23 21:00:30 compute-1 sudo[221138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:31 compute-1 python3.9[221140]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 21:00:31 compute-1 sudo[221138]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:31.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:31 compute-1 sudo[221291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtwaalibyxynosxdetdgkscxgbabavet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931631.200239-2713-154688222462566/AnsiballZ_command.py'
Nov 23 21:00:31 compute-1 sudo[221291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:31 compute-1 python3.9[221293]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 21:00:31 compute-1 sudo[221291]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:31 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75880016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:32.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:32 compute-1 sudo[221445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btgrfvtaxtvgdnodehmtfislxcbrqtpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931631.8572335-2713-35574629769152/AnsiballZ_command.py'
Nov 23 21:00:32 compute-1 sudo[221445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:32 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:32 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:32 compute-1 ceph-mon[80135]: pgmap v565: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 21:00:32 compute-1 python3.9[221447]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 21:00:32 compute-1 sudo[221445]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:32 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:32 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594004190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:00:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:33.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:00:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:00:33 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:00:33 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:33 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75880016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:34.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:34 compute-1 sudo[221474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 21:00:34 compute-1 sudo[221474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:00:34 compute-1 sudo[221474]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:34 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:34 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:34 compute-1 sudo[221499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Nov 23 21:00:34 compute-1 sudo[221499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:00:34 compute-1 sudo[221499]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:34 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:34 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:34 compute-1 ceph-mon[80135]: pgmap v566: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:00:35 compute-1 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 21:00:35 compute-1 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 9173 writes, 35K keys, 9173 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 9173 writes, 2093 syncs, 4.38 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 845 writes, 1350 keys, 845 commit groups, 1.0 writes per commit group, ingest: 0.45 MB, 0.00 MB/s
                                           Interval WAL: 845 writes, 399 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5580590769b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5580590769b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5580590769b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 23 21:00:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:35.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:35 compute-1 sudo[221623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 21:00:35 compute-1 sudo[221623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:00:35 compute-1 sudo[221623]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:35 compute-1 sudo[221728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmkvfsiqsoxkuvvzjgblwihlcabkuqwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931634.9212906-2920-249712235398798/AnsiballZ_file.py'
Nov 23 21:00:35 compute-1 sudo[221675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 21:00:35 compute-1 sudo[221728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:35 compute-1 sudo[221675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:00:35 compute-1 podman[221667]: 2025-11-23 21:00:35.236347426 +0000 UTC m=+0.077575920 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 21:00:35 compute-1 python3.9[221739]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 21:00:35 compute-1 sudo[221728]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:35 compute-1 sudo[221675]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:35 compute-1 sudo[221930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boppaqtivbfoejrbnxrclhcaqyttjyjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931635.5882025-2920-31847418411713/AnsiballZ_file.py'
Nov 23 21:00:35 compute-1 sudo[221930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:35 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:35 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:00:35 compute-1 ceph-mon[80135]: pgmap v567: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 21:00:35 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:00:35 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:00:35 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:00:35 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:00:35 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 21:00:35 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:00:35 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:00:35 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 21:00:35 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 21:00:35 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:00:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:36.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:36 compute-1 python3.9[221932]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 21:00:36 compute-1 sudo[221930]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:36 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:36 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75880032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:36 compute-1 sudo[222082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thkchddgqahqzopbvcoumvopslrtasud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931636.25074-2920-14970635913566/AnsiballZ_file.py'
Nov 23 21:00:36 compute-1 sudo[222082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:36 compute-1 python3.9[222084]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 21:00:36 compute-1 sudo[222082]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:36 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:36 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:37.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:37 compute-1 sudo[222235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdjhmpwtjzhrrvofogukhzrxeyqksffm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931637.4569118-2986-239300419490023/AnsiballZ_file.py'
Nov 23 21:00:37 compute-1 sudo[222235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:37 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:37 compute-1 python3.9[222237]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 21:00:38 compute-1 sudo[222235]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:38 compute-1 ceph-mon[80135]: pgmap v568: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:00:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:38.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:38 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:38 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594004190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:38 compute-1 sudo[222387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqdmmxhsrzseqvgsxzrwojbkzxmqwlly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931638.1363544-2986-30144802588865/AnsiballZ_file.py'
Nov 23 21:00:38 compute-1 sudo[222387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:38 compute-1 python3.9[222389]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 21:00:38 compute-1 sudo[222387]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:38 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:00:38 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:38 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75880032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:38 compute-1 sudo[222539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swdgjdxsssspelsdfcsaqrxalqmlbaif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931638.7226562-2986-133453198733340/AnsiballZ_file.py'
Nov 23 21:00:38 compute-1 sudo[222539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:39.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:39 compute-1 python3.9[222541]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 21:00:39 compute-1 sudo[222539]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:39 compute-1 sudo[222692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwlidqjoluqczswprukrzjjcquwbxpms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931639.3566215-2986-247222418827256/AnsiballZ_file.py'
Nov 23 21:00:39 compute-1 sudo[222692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:39 compute-1 python3.9[222694]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 21:00:39 compute-1 sudo[222692]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:39 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:39 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:00:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:40.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:00:40 compute-1 ceph-mon[80135]: pgmap v569: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:00:40 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:40 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:40 compute-1 sudo[222844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtqeajjinrprssoyxijktwfioiwnsfpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931640.0034494-2986-225906761136848/AnsiballZ_file.py'
Nov 23 21:00:40 compute-1 sudo[222844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:40 compute-1 sudo[222847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 21:00:40 compute-1 sudo[222847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:00:40 compute-1 sudo[222847]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:40 compute-1 python3.9[222846]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 23 21:00:40 compute-1 sudo[222844]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:40 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:40 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594004190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:41 compute-1 sudo[223036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlgxllwzlihgfsiogaxlhkgoaedxenca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931640.7815833-2986-223723568418737/AnsiballZ_file.py'
Nov 23 21:00:41 compute-1 sudo[223036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:41 compute-1 podman[222995]: 2025-11-23 21:00:41.063609391 +0000 UTC m=+0.061624747 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 21:00:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:41.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:41 compute-1 python3.9[223042]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 23 21:00:41 compute-1 sudo[223036]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:41 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:00:41 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:00:41 compute-1 sudo[223121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:00:41 compute-1 sudo[223121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:00:41 compute-1 sudo[223121]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:41 compute-1 sudo[223219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvrsvbyzsykrkaivamstcbaehtykecqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931641.4228122-2986-33068999597294/AnsiballZ_file.py'
Nov 23 21:00:41 compute-1 sudo[223219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:41 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7588003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:41 compute-1 python3.9[223221]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 23 21:00:42 compute-1 sudo[223219]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:42.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:42 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:42 compute-1 ceph-mon[80135]: pgmap v570: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 21:00:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:42 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:43.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:43 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:00:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:43 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594004190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:00:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:44.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:00:44 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:44 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7588003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:44 compute-1 ceph-mon[80135]: pgmap v571: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:00:44 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:44 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:00:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:45.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:00:45 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:45 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:46.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:46 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594004190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:46 compute-1 ceph-mon[80135]: pgmap v572: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 21:00:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:46 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7588003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:47.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:47 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:48 compute-1 sudo[223374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pszhdkblzqbdmluedmfcemrbkalobffa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931647.6505053-3311-121201124028666/AnsiballZ_getent.py'
Nov 23 21:00:48 compute-1 sudo[223374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:48.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:48 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:48 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:48 compute-1 python3.9[223376]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 23 21:00:48 compute-1 sudo[223374]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:48 compute-1 ceph-mon[80135]: pgmap v573: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:00:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:00:48 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:00:48 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:48 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594004190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:49 compute-1 sudo[223527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjrmufmhlauelnurghzsmyibazrwdkzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931648.6990218-3335-270740291040115/AnsiballZ_group.py'
Nov 23 21:00:49 compute-1 sudo[223527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:49.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:49 compute-1 python3.9[223529]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 23 21:00:49 compute-1 groupadd[223530]: group added to /etc/group: name=nova, GID=42436
Nov 23 21:00:49 compute-1 groupadd[223530]: group added to /etc/gshadow: name=nova
Nov 23 21:00:49 compute-1 groupadd[223530]: new group: name=nova, GID=42436
Nov 23 21:00:49 compute-1 sudo[223527]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:49 compute-1 ceph-mon[80135]: pgmap v574: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:00:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:49 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594004190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:00:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:50.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:00:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:50 compute-1 sudo[223686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opdyxqokvupooihtxdnhnnmwldwnbfid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931649.847253-3359-221668625752363/AnsiballZ_user.py'
Nov 23 21:00:50 compute-1 sudo[223686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:00:50 compute-1 python3.9[223688]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 23 21:00:50 compute-1 useradd[223690]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Nov 23 21:00:50 compute-1 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 21:00:50 compute-1 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 21:00:50 compute-1 useradd[223690]: add 'nova' to group 'libvirt'
Nov 23 21:00:50 compute-1 useradd[223690]: add 'nova' to shadow group 'libvirt'
Nov 23 21:00:50 compute-1 sudo[223686]: pam_unix(sudo:session): session closed for user root
Nov 23 21:00:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:00:51.057 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:00:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:00:51.057 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:00:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:00:51.058 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:00:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:51.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:51 compute-1 ceph-osd[77613]: bluestore.MempoolThread fragmentation_score=0.000031 took=0.000034s
Nov 23 21:00:51 compute-1 sshd-session[223723]: Accepted publickey for zuul from 192.168.122.30 port 33454 ssh2: ECDSA SHA256:7LF3rB/846W//CS4OIcVKlH1BXQGVCcZuH+b9rjPyTo
Nov 23 21:00:51 compute-1 systemd-logind[793]: New session 54 of user zuul.
Nov 23 21:00:51 compute-1 systemd[1]: Started Session 54 of User zuul.
Nov 23 21:00:51 compute-1 sshd-session[223723]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 21:00:51 compute-1 sshd-session[223726]: Received disconnect from 192.168.122.30 port 33454:11: disconnected by user
Nov 23 21:00:51 compute-1 sshd-session[223726]: Disconnected from user zuul 192.168.122.30 port 33454
Nov 23 21:00:51 compute-1 sshd-session[223723]: pam_unix(sshd:session): session closed for user zuul
Nov 23 21:00:51 compute-1 systemd[1]: session-54.scope: Deactivated successfully.
Nov 23 21:00:51 compute-1 systemd-logind[793]: Session 54 logged out. Waiting for processes to exit.
Nov 23 21:00:51 compute-1 systemd-logind[793]: Removed session 54.
Nov 23 21:00:51 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:51 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:00:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:52.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:00:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:52 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:52 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8002130 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:52 compute-1 python3.9[223878]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 21:00:52 compute-1 ceph-mon[80135]: pgmap v575: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 21:00:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:53.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:53 compute-1 python3.9[223999]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931652.124245-3434-273030129110473/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 21:00:53 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:00:53 compute-1 python3.9[224150]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 21:00:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:53 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210053 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 21:00:53 compute-1 ceph-mon[80135]: pgmap v576: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:00:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:54.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:54 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:54 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7584000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:54 compute-1 python3.9[224226]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 21:00:54 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:54 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:54 compute-1 python3.9[224376]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 21:00:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:55.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:55 compute-1 python3.9[224497]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931654.4825194-3434-165862229189253/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 21:00:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:55 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8002130 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:56 compute-1 python3.9[224648]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 21:00:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:56.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:56 compute-1 ceph-mon[80135]: pgmap v577: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 21:00:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:56 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:56 compute-1 python3.9[224769]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931655.5805774-3434-51066565829997/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=bc7f3bb7d4094c596a18178a888511b54e157ba4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 21:00:56 compute-1 podman[224770]: 2025-11-23 21:00:56.650586533 +0000 UTC m=+0.062190073 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 23 21:00:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:56 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75840016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:57.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:57 compute-1 python3.9[224937]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 21:00:57 compute-1 python3.9[225059]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931656.7227464-3434-19654306606951/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 21:00:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:57 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:00:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:58.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:00:58 compute-1 ceph-mon[80135]: pgmap v578: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:00:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:58 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8002ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:58 compute-1 python3.9[225209]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 21:00:58 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:00:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:58 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:00:58 compute-1 python3.9[225330]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931657.9103436-3434-9273580497764/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 21:00:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:00:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:00:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:59.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:00:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:59 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75840016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:00.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:01:00 compute-1 ceph-mon[80135]: pgmap v579: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:01:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:00 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:00 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8002ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:00 compute-1 sudo[225481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reuniibfmbkmjcmdttuxjhhzhuohygwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931660.704669-3683-240044280228838/AnsiballZ_file.py'
Nov 23 21:01:00 compute-1 sudo[225481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:01:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:01:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:01.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:01:01 compute-1 python3.9[225483]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 21:01:01 compute-1 sudo[225481]: pam_unix(sudo:session): session closed for user root
Nov 23 21:01:01 compute-1 CROND[225485]: (root) CMD (run-parts /etc/cron.hourly)
Nov 23 21:01:01 compute-1 run-parts[225494]: (/etc/cron.hourly) starting 0anacron
Nov 23 21:01:01 compute-1 anacron[225519]: Anacron started on 2025-11-23
Nov 23 21:01:01 compute-1 anacron[225519]: Will run job `cron.daily' in 47 min.
Nov 23 21:01:01 compute-1 anacron[225519]: Will run job `cron.weekly' in 67 min.
Nov 23 21:01:01 compute-1 anacron[225519]: Will run job `cron.monthly' in 87 min.
Nov 23 21:01:01 compute-1 anacron[225519]: Jobs will be executed sequentially
Nov 23 21:01:01 compute-1 run-parts[225521]: (/etc/cron.hourly) finished 0anacron
Nov 23 21:01:01 compute-1 CROND[225484]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 23 21:01:01 compute-1 sudo[225553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:01:01 compute-1 sudo[225553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:01:01 compute-1 sudo[225553]: pam_unix(sudo:session): session closed for user root
Nov 23 21:01:01 compute-1 sudo[225674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zspgolkxgdhmlwpeududswnamuhpbgai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931661.5694764-3707-266470697985733/AnsiballZ_copy.py'
Nov 23 21:01:01 compute-1 sudo[225674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:01:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:01 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:02 compute-1 python3.9[225676]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 21:01:02 compute-1 sudo[225674]: pam_unix(sudo:session): session closed for user root
Nov 23 21:01:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:01:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:02.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:01:02 compute-1 ceph-mon[80135]: pgmap v580: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 21:01:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:02 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:02 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:02 compute-1 sudo[225826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bthiseixvnxnjhiuyusmhqqputkvgeeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931662.539928-3732-269543721325283/AnsiballZ_stat.py'
Nov 23 21:01:02 compute-1 sudo[225826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:01:03 compute-1 python3.9[225828]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 21:01:03 compute-1 sudo[225826]: pam_unix(sudo:session): session closed for user root
Nov 23 21:01:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 21:01:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:03.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 21:01:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:03 : epoch 692375ba : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 21:01:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:01:03 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:01:03 compute-1 sudo[225979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pehswvaekixkpkfzbqjfeivcryajflvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931663.4229288-3755-224818914187681/AnsiballZ_stat.py'
Nov 23 21:01:03 compute-1 sudo[225979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:01:03 compute-1 python3.9[225981]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 21:01:03 compute-1 sudo[225979]: pam_unix(sudo:session): session closed for user root
Nov 23 21:01:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:03 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8002ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:04.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:01:04 compute-1 ceph-mon[80135]: pgmap v581: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 21:01:04 compute-1 sudo[226102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irfyojjoidnxvghibecmsdqjmnntflxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931663.4229288-3755-224818914187681/AnsiballZ_copy.py'
Nov 23 21:01:04 compute-1 sudo[226102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:01:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:04 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:04 compute-1 python3.9[226104]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1763931663.4229288-3755-224818914187681/.source _original_basename=.t2p5yfun follow=False checksum=7c7d8744d02362b5febd08bf84fb657e50088a13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 23 21:01:04 compute-1 sudo[226102]: pam_unix(sudo:session): session closed for user root
Nov 23 21:01:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:04 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7584002720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:05.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:01:05 compute-1 podman[226231]: 2025-11-23 21:01:05.639953073 +0000 UTC m=+0.082472200 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 21:01:05 compute-1 python3.9[226272]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 21:01:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:05 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:01:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:06.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:01:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:06 : epoch 692375ba : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 21:01:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:06 : epoch 692375ba : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 21:01:06 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 21:01:06 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3928 writes, 21K keys, 3928 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.05 MB/s
                                           Cumulative WAL: 3928 writes, 3928 syncs, 1.00 writes per sync, written: 0.05 GB, 0.05 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1458 writes, 6861 keys, 1458 commit groups, 1.0 writes per commit group, ingest: 16.43 MB, 0.03 MB/s
                                           Interval WAL: 1458 writes, 1458 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     60.1      0.55              0.08        10    0.055       0      0       0.0       0.0
                                             L6      1/0   12.76 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.5     96.4     82.0      1.42              0.30         9    0.158     43K   4824       0.0       0.0
                                            Sum      1/0   12.76 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.5     69.4     75.9      1.97              0.38        19    0.104     43K   4824       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   5.5    100.3    100.2      0.65              0.17         8    0.081     22K   2563       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     96.4     82.0      1.42              0.30         9    0.158     43K   4824       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     60.4      0.55              0.08         9    0.061       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.033, interval 0.012
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.15 GB write, 0.12 MB/s write, 0.13 GB read, 0.11 MB/s read, 2.0 seconds
                                           Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560649e57350#2 capacity: 304.00 MB usage: 8.60 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 8.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(482,8.23 MB,2.70677%) FilterBlock(19,130.80 KB,0.0420169%) IndexBlock(19,251.02 KB,0.0806357%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 23 21:01:06 compute-1 ceph-mon[80135]: pgmap v582: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 21:01:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:06 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8003f40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:06 compute-1 python3.9[226435]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 21:01:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:06 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:07 compute-1 python3.9[226556]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931666.1309242-3833-72308968981228/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=4c77b2c041a7564aa2c84115117dc8517e9bb9ef backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 21:01:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:07.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:01:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:07 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7584002720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:08 compute-1 python3.9[226707]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 21:01:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 21:01:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:08.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 21:01:08 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:08 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009de0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:08 compute-1 ceph-mon[80135]: pgmap v583: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 21:01:08 compute-1 python3.9[226828]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931667.6287699-3879-104748204440611/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=941d5739094d046b86479403aeaaf0441b82ba11 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 21:01:08 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:01:08 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:08 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8003f40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 21:01:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:09.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 21:01:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:09 : epoch 692375ba : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 21:01:09 compute-1 ceph-mon[80135]: pgmap v584: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 21:01:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:09 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:10 compute-1 sudo[226979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfodurtbdeqbczkkiidkmilzfuivxegm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931669.4915893-3930-163727293215428/AnsiballZ_container_config_data.py'
Nov 23 21:01:10 compute-1 sudo[226979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:01:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:10.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:01:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:10 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:10 compute-1 python3.9[226981]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 23 21:01:10 compute-1 sudo[226979]: pam_unix(sudo:session): session closed for user root
Nov 23 21:01:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:10 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:10 compute-1 sudo[227131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnoobmfqiaivnfqexvhwogdbkqylbuav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931670.5496871-3956-257946464952033/AnsiballZ_container_config_hash.py'
Nov 23 21:01:10 compute-1 sudo[227131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:01:11 compute-1 python3.9[227133]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 23 21:01:11 compute-1 sudo[227131]: pam_unix(sudo:session): session closed for user root
Nov 23 21:01:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:11.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:01:11 compute-1 podman[227172]: 2025-11-23 21:01:11.660069448 +0000 UTC m=+0.071856419 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3)
Nov 23 21:01:11 compute-1 sudo[227304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnfvfpvzyictxccyltwidwrbkmlokdft ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763931671.5904198-3986-45572149003725/AnsiballZ_edpm_container_manage.py'
Nov 23 21:01:11 compute-1 sudo[227304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:01:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:11 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8003f40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:12 compute-1 ceph-mon[80135]: pgmap v585: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 21:01:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:12.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:01:12 compute-1 python3[227306]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 23 21:01:12 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:12 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7584003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:12 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:12 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:13.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:01:13 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:01:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:13 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009e20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:14.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:01:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:14 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8004c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:14 compute-1 ceph-mon[80135]: pgmap v586: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 21:01:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:14 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7584003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:01:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:15.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:01:15 compute-1 ceph-mon[80135]: pgmap v587: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 21:01:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210115 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 21:01:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:15 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:16.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:01:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:16 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:16 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8004c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:01:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:17.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:01:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:17 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7584003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:18.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:01:18 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:18 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:18 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:01:18 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:18 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009e60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:19.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:01:19 compute-1 ceph-mon[80135]: pgmap v588: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Nov 23 21:01:19 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:01:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:19 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8004c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:20.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:01:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:20 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7584003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:20 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7584003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:21.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:01:21 compute-1 sudo[227386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:01:21 compute-1 sudo[227386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:01:21 compute-1 sudo[227386]: pam_unix(sudo:session): session closed for user root
Nov 23 21:01:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:21 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:22.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:01:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:22 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8004c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:22 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7584003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:01:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:23.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:01:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:23 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7584003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:24.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:01:24 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:24 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:24 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:24 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7588001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:24 compute-1 ceph-mon[80135]: pgmap v589: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Nov 23 21:01:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:25.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:01:25 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:01:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:25 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f757c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:01:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:26.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:01:26 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:26 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:26 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:26 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:27.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:01:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:27 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:01:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:28.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:01:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:28 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f757c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:28 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:29.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:01:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:29 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:01:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:01:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:30.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:01:30 compute-1 kernel: ganesha.nfsd[227414]: segfault at 50 ip 00007f765e4d732e sp 00007f762cff8210 error 4 in libntirpc.so.5.8[7f765e4bc000+2c000] likely on CPU 4 (core 0, socket 4)
Nov 23 21:01:30 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 23 21:01:30 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:30 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594002ad0 fd 39 proxy ignored for local
Nov 23 21:01:30 compute-1 systemd[1]: Started Process Core Dump (PID 227431/UID 0).
Nov 23 21:01:31 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:01:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:31.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:01:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:01:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:32.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:01:32 compute-1 systemd-coredump[227432]: Process 212134 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 58:
                                                    #0  0x00007f765e4d732e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 23 21:01:32 compute-1 ceph-mon[80135]: pgmap v590: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Nov 23 21:01:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:33.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:01:33 compute-1 systemd[1]: systemd-coredump@7-227431-0.service: Deactivated successfully.
Nov 23 21:01:33 compute-1 systemd[1]: systemd-coredump@7-227431-0.service: Consumed 1.354s CPU time.
Nov 23 21:01:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:34.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:01:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:35.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:01:35 compute-1 ceph-mon[80135]: pgmap v591: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 23 21:01:35 compute-1 ceph-mon[80135]: pgmap v592: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 23 21:01:35 compute-1 ceph-mon[80135]: pgmap v593: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:01:35 compute-1 ceph-mon[80135]: pgmap v594: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:01:35 compute-1 ceph-mon[80135]: pgmap v595: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:01:35 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:01:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:36.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:01:36 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:01:36 compute-1 podman[227439]: 2025-11-23 21:01:36.451711506 +0000 UTC m=+2.463528163 container died 36fdc947acb0f74c6cb2dbe393a95acb88d4327855046b049a9b027d3568eb16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Nov 23 21:01:36 compute-1 systemd[1]: var-lib-containers-storage-overlay-63ac0bfe19a9b8adea475ad895db306f3a67519182ff8422f85cbb367036cc4d-merged.mount: Deactivated successfully.
Nov 23 21:01:36 compute-1 podman[227419]: 2025-11-23 21:01:36.523655316 +0000 UTC m=+8.919421606 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 23 21:01:36 compute-1 podman[227439]: 2025-11-23 21:01:36.530630801 +0000 UTC m=+2.542447448 container remove 36fdc947acb0f74c6cb2dbe393a95acb88d4327855046b049a9b027d3568eb16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 21:01:36 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Main process exited, code=exited, status=139/n/a
Nov 23 21:01:36 compute-1 podman[227320]: 2025-11-23 21:01:36.548272449 +0000 UTC m=+24.314196028 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 23 21:01:36 compute-1 ceph-mon[80135]: pgmap v596: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:01:36 compute-1 ceph-mon[80135]: pgmap v597: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 23 21:01:36 compute-1 podman[227457]: 2025-11-23 21:01:36.653617695 +0000 UTC m=+0.130603368 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Nov 23 21:01:36 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Failed with result 'exit-code'.
Nov 23 21:01:36 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.574s CPU time.
Nov 23 21:01:36 compute-1 podman[227529]: 2025-11-23 21:01:36.732206641 +0000 UTC m=+0.050745528 container create 3eab058616580740aadc24acbbd43c84853a46eb879fdefff975864a15415e9c (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute_init, org.label-schema.license=GPLv2, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 23 21:01:36 compute-1 podman[227529]: 2025-11-23 21:01:36.703247652 +0000 UTC m=+0.021786559 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 23 21:01:36 compute-1 python3[227306]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Nov 23 21:01:36 compute-1 sudo[227304]: pam_unix(sudo:session): session closed for user root
Nov 23 21:01:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:37.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:01:37 compute-1 sudo[227722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hldhtgphfwrpnwygvpgscxecatjdozzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931697.0726411-4010-215362314422947/AnsiballZ_stat.py'
Nov 23 21:01:37 compute-1 sudo[227722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:01:37 compute-1 python3.9[227724]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 21:01:37 compute-1 sudo[227722]: pam_unix(sudo:session): session closed for user root
Nov 23 21:01:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:38.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:01:38 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210138 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 21:01:38 compute-1 ceph-mon[80135]: pgmap v598: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:01:38 compute-1 sudo[227877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iflvbkfyvnbtjepiwdzxtijqkjkurfnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931698.4276106-4046-51059941044477/AnsiballZ_container_config_data.py'
Nov 23 21:01:38 compute-1 sudo[227877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:01:38 compute-1 python3.9[227879]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 23 21:01:38 compute-1 sudo[227877]: pam_unix(sudo:session): session closed for user root
Nov 23 21:01:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:39.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:01:39 compute-1 sudo[228030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcjnsfsqbjpahibboyxzvjpxiyrereaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931699.3744907-4073-30520057209891/AnsiballZ_container_config_hash.py'
Nov 23 21:01:39 compute-1 sudo[228030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:01:39 compute-1 ceph-mon[80135]: pgmap v599: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:01:39 compute-1 python3.9[228032]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 23 21:01:39 compute-1 sudo[228030]: pam_unix(sudo:session): session closed for user root
Nov 23 21:01:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 21:01:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:40.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 21:01:40 compute-1 sudo[228182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gonstzmndwavhrdjvrlrdrrmgfdhszgt ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763931700.3879974-4103-278601788293556/AnsiballZ_edpm_container_manage.py'
Nov 23 21:01:40 compute-1 sudo[228182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:01:40 compute-1 sudo[228185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 21:01:40 compute-1 sudo[228185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:01:40 compute-1 sudo[228185]: pam_unix(sudo:session): session closed for user root
Nov 23 21:01:40 compute-1 sudo[228210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 21:01:40 compute-1 sudo[228210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:01:40 compute-1 python3[228184]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 23 21:01:41 compute-1 podman[228280]: 2025-11-23 21:01:41.129141586 +0000 UTC m=+0.046640168 container create e57bd1d81dfaddb9e853d32adec041960bea7d0beff0b4ed65acc6be6ec8e0df (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 23 21:01:41 compute-1 podman[228280]: 2025-11-23 21:01:41.103479325 +0000 UTC m=+0.020977937 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 23 21:01:41 compute-1 python3[228184]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076 kolla_start
Nov 23 21:01:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:41.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:01:41 compute-1 sudo[228182]: pam_unix(sudo:session): session closed for user root
Nov 23 21:01:41 compute-1 sudo[228210]: pam_unix(sudo:session): session closed for user root
Nov 23 21:01:41 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:01:41 compute-1 sudo[228414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:01:41 compute-1 sudo[228414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:01:41 compute-1 sudo[228414]: pam_unix(sudo:session): session closed for user root
Nov 23 21:01:41 compute-1 podman[228455]: 2025-11-23 21:01:41.953801863 +0000 UTC m=+0.064654057 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 21:01:42 compute-1 sudo[228532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdtsdfsmcofchsmiqcjpibrwowgojkls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931701.7354326-4127-201454431206759/AnsiballZ_stat.py'
Nov 23 21:01:42 compute-1 sudo[228532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:01:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 21:01:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:42.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 21:01:42 compute-1 python3.9[228534]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 21:01:42 compute-1 sudo[228532]: pam_unix(sudo:session): session closed for user root
Nov 23 21:01:42 compute-1 ceph-mon[80135]: pgmap v600: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:01:42 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:01:42 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:01:42 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:01:42 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 21:01:42 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:01:42 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:01:42 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 21:01:42 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 21:01:42 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:01:42 compute-1 sudo[228686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xiemminscklohsbjoynardqdsqowpiga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931702.6478279-4154-234335321827233/AnsiballZ_file.py'
Nov 23 21:01:42 compute-1 sudo[228686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:01:43 compute-1 python3.9[228688]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 21:01:43 compute-1 sudo[228686]: pam_unix(sudo:session): session closed for user root
Nov 23 21:01:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:43.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:01:43 compute-1 sudo[228838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irmbouiipisctfymdixrrjjlgcbjotui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931703.1955934-4154-164626617065143/AnsiballZ_copy.py'
Nov 23 21:01:43 compute-1 sudo[228838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:01:43 compute-1 python3.9[228840]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763931703.1955934-4154-164626617065143/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 21:01:43 compute-1 sudo[228838]: pam_unix(sudo:session): session closed for user root
Nov 23 21:01:44 compute-1 sudo[228914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdlpglzadrrbddvburhsvyttqyvhbmgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931703.1955934-4154-164626617065143/AnsiballZ_systemd.py'
Nov 23 21:01:44 compute-1 sudo[228914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:01:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 21:01:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:44.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 21:01:44 compute-1 python3.9[228916]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 21:01:44 compute-1 systemd[1]: Reloading.
Nov 23 21:01:44 compute-1 ceph-mon[80135]: pgmap v601: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:01:44 compute-1 systemd-rc-local-generator[228936]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 21:01:44 compute-1 systemd-sysv-generator[228941]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 21:01:44 compute-1 sudo[228914]: pam_unix(sudo:session): session closed for user root
Nov 23 21:01:45 compute-1 sudo[229026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnjeyrbbtjwjssvrlrcpbquphzkblror ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931703.1955934-4154-164626617065143/AnsiballZ_systemd.py'
Nov 23 21:01:45 compute-1 sudo[229026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:01:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:45.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:01:45 compute-1 python3.9[229028]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 21:01:45 compute-1 systemd[1]: Reloading.
Nov 23 21:01:45 compute-1 systemd-rc-local-generator[229054]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 21:01:45 compute-1 systemd-sysv-generator[229058]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 21:01:45 compute-1 systemd[1]: Starting nova_compute container...
Nov 23 21:01:45 compute-1 systemd[1]: Started libcrun container.
Nov 23 21:01:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5f9a839ebcc2f2be232b73ad095168158bbf494a95d293329848704129b6cc/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 23 21:01:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5f9a839ebcc2f2be232b73ad095168158bbf494a95d293329848704129b6cc/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 23 21:01:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5f9a839ebcc2f2be232b73ad095168158bbf494a95d293329848704129b6cc/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 23 21:01:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5f9a839ebcc2f2be232b73ad095168158bbf494a95d293329848704129b6cc/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 21:01:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5f9a839ebcc2f2be232b73ad095168158bbf494a95d293329848704129b6cc/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 23 21:01:45 compute-1 podman[229069]: 2025-11-23 21:01:45.850776263 +0000 UTC m=+0.093993171 container init e57bd1d81dfaddb9e853d32adec041960bea7d0beff0b4ed65acc6be6ec8e0df (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, config_id=edpm)
Nov 23 21:01:45 compute-1 podman[229069]: 2025-11-23 21:01:45.857283376 +0000 UTC m=+0.100500254 container start e57bd1d81dfaddb9e853d32adec041960bea7d0beff0b4ed65acc6be6ec8e0df (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=nova_compute, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 21:01:45 compute-1 podman[229069]: nova_compute
Nov 23 21:01:45 compute-1 nova_compute[229084]: + sudo -E kolla_set_configs
Nov 23 21:01:45 compute-1 systemd[1]: Started nova_compute container.
Nov 23 21:01:45 compute-1 sudo[229026]: pam_unix(sudo:session): session closed for user root
Nov 23 21:01:45 compute-1 nova_compute[229084]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 23 21:01:45 compute-1 nova_compute[229084]: INFO:__main__:Validating config file
Nov 23 21:01:45 compute-1 nova_compute[229084]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 23 21:01:45 compute-1 nova_compute[229084]: INFO:__main__:Copying service configuration files
Nov 23 21:01:45 compute-1 nova_compute[229084]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 23 21:01:45 compute-1 nova_compute[229084]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 23 21:01:45 compute-1 nova_compute[229084]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 23 21:01:45 compute-1 nova_compute[229084]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 23 21:01:45 compute-1 nova_compute[229084]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 23 21:01:45 compute-1 nova_compute[229084]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 23 21:01:45 compute-1 nova_compute[229084]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 23 21:01:45 compute-1 nova_compute[229084]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 23 21:01:45 compute-1 nova_compute[229084]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 23 21:01:45 compute-1 nova_compute[229084]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 23 21:01:45 compute-1 nova_compute[229084]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 23 21:01:45 compute-1 nova_compute[229084]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 23 21:01:45 compute-1 nova_compute[229084]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 23 21:01:45 compute-1 nova_compute[229084]: INFO:__main__:Deleting /etc/ceph
Nov 23 21:01:45 compute-1 nova_compute[229084]: INFO:__main__:Creating directory /etc/ceph
Nov 23 21:01:45 compute-1 nova_compute[229084]: INFO:__main__:Setting permission for /etc/ceph
Nov 23 21:01:45 compute-1 nova_compute[229084]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 23 21:01:45 compute-1 nova_compute[229084]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 23 21:01:45 compute-1 nova_compute[229084]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 23 21:01:45 compute-1 nova_compute[229084]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 23 21:01:45 compute-1 nova_compute[229084]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 23 21:01:45 compute-1 nova_compute[229084]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 23 21:01:45 compute-1 nova_compute[229084]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 23 21:01:45 compute-1 nova_compute[229084]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 23 21:01:45 compute-1 nova_compute[229084]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 23 21:01:45 compute-1 nova_compute[229084]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 23 21:01:45 compute-1 nova_compute[229084]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 23 21:01:45 compute-1 nova_compute[229084]: INFO:__main__:Writing out command to execute
Nov 23 21:01:45 compute-1 nova_compute[229084]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 23 21:01:45 compute-1 nova_compute[229084]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 23 21:01:45 compute-1 nova_compute[229084]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 23 21:01:45 compute-1 nova_compute[229084]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 23 21:01:45 compute-1 nova_compute[229084]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 23 21:01:45 compute-1 nova_compute[229084]: ++ cat /run_command
Nov 23 21:01:45 compute-1 nova_compute[229084]: + CMD=nova-compute
Nov 23 21:01:45 compute-1 nova_compute[229084]: + ARGS=
Nov 23 21:01:45 compute-1 nova_compute[229084]: + sudo kolla_copy_cacerts
Nov 23 21:01:45 compute-1 nova_compute[229084]: Running command: 'nova-compute'
Nov 23 21:01:45 compute-1 nova_compute[229084]: + [[ ! -n '' ]]
Nov 23 21:01:45 compute-1 nova_compute[229084]: + . kolla_extend_start
Nov 23 21:01:45 compute-1 nova_compute[229084]: + echo 'Running command: '\''nova-compute'\'''
Nov 23 21:01:45 compute-1 nova_compute[229084]: + umask 0022
Nov 23 21:01:45 compute-1 nova_compute[229084]: + exec nova-compute
Nov 23 21:01:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:01:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:46.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:01:46 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:01:46 compute-1 ceph-mon[80135]: pgmap v602: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 21:01:46 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Scheduled restart job, restart counter is at 8.
Nov 23 21:01:46 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 21:01:46 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.574s CPU time.
Nov 23 21:01:46 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 21:01:47 compute-1 podman[229170]: 2025-11-23 21:01:47.035743465 +0000 UTC m=+0.039246995 container create a20cc2100a0ce143f194bbe51ab7e7ee427f407c69a4b8a256f1b12ed5026683 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 21:01:47 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ced5f9ab2d01d0012c20a9d3e4190fdc56bf8f17f77de53f364aa847543f0855/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 23 21:01:47 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ced5f9ab2d01d0012c20a9d3e4190fdc56bf8f17f77de53f364aa847543f0855/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 21:01:47 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ced5f9ab2d01d0012c20a9d3e4190fdc56bf8f17f77de53f364aa847543f0855/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 21:01:47 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ced5f9ab2d01d0012c20a9d3e4190fdc56bf8f17f77de53f364aa847543f0855/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.fuxuha-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 21:01:47 compute-1 podman[229170]: 2025-11-23 21:01:47.101838802 +0000 UTC m=+0.105342352 container init a20cc2100a0ce143f194bbe51ab7e7ee427f407c69a4b8a256f1b12ed5026683 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 21:01:47 compute-1 podman[229170]: 2025-11-23 21:01:47.10705214 +0000 UTC m=+0.110555670 container start a20cc2100a0ce143f194bbe51ab7e7ee427f407c69a4b8a256f1b12ed5026683 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 21:01:47 compute-1 bash[229170]: a20cc2100a0ce143f194bbe51ab7e7ee427f407c69a4b8a256f1b12ed5026683
Nov 23 21:01:47 compute-1 podman[229170]: 2025-11-23 21:01:47.018601658 +0000 UTC m=+0.022105188 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 21:01:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:01:47 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 23 21:01:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:01:47 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 23 21:01:47 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 21:01:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:01:47 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 23 21:01:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:01:47 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 23 21:01:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:01:47 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 23 21:01:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:01:47 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 23 21:01:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:01:47 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 23 21:01:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:01:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:47.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:01:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:01:47 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 21:01:47 compute-1 python3.9[229352]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 21:01:47 compute-1 sudo[229354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 21:01:47 compute-1 sudo[229354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:01:47 compute-1 sudo[229354]: pam_unix(sudo:session): session closed for user root
Nov 23 21:01:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:48.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:01:48 compute-1 nova_compute[229084]: 2025-11-23 21:01:48.445 229088 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 23 21:01:48 compute-1 nova_compute[229084]: 2025-11-23 21:01:48.445 229088 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 23 21:01:48 compute-1 nova_compute[229084]: 2025-11-23 21:01:48.445 229088 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 23 21:01:48 compute-1 nova_compute[229084]: 2025-11-23 21:01:48.446 229088 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Nov 23 21:01:48 compute-1 ceph-mon[80135]: pgmap v603: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 21:01:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:01:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:01:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:01:48 compute-1 nova_compute[229084]: 2025-11-23 21:01:48.624 229088 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:01:48 compute-1 nova_compute[229084]: 2025-11-23 21:01:48.658 229088 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:01:48 compute-1 nova_compute[229084]: 2025-11-23 21:01:48.658 229088 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Nov 23 21:01:48 compute-1 python3.9[229530]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 21:01:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:49.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.273 229088 INFO nova.virt.driver [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.529 229088 INFO nova.compute.provider_config [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.538 229088 DEBUG oslo_concurrency.lockutils [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.539 229088 DEBUG oslo_concurrency.lockutils [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.539 229088 DEBUG oslo_concurrency.lockutils [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.539 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.539 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.540 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.540 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.540 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.540 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.540 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.540 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.540 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.541 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.541 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.541 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.541 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.541 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.541 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.541 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.542 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.542 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.542 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.542 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.542 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.542 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.542 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.543 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.543 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.543 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.543 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.543 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.543 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.544 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.544 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.544 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.544 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.544 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.544 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.545 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.545 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.545 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.546 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.546 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.546 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.546 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.547 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.547 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.547 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.547 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.547 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.547 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.548 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.548 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.548 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.548 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.548 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.548 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.549 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.549 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.549 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.549 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.549 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.549 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.550 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.550 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.550 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.550 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.550 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.551 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.551 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.551 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.551 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.551 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.551 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.552 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.552 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.552 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.552 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.552 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.552 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.552 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.553 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.553 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.553 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.553 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.553 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.553 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.554 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.554 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.554 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.554 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.554 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.555 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.555 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.555 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.555 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.555 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.555 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.556 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.556 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.556 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.556 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.556 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.556 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.556 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.557 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.557 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.557 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.557 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.557 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.557 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.557 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.558 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.558 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.558 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.558 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.558 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.558 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.558 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.559 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.559 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.559 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.559 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.559 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.559 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.559 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.560 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.560 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.560 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.560 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.560 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.560 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.560 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.561 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.561 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.561 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.561 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.561 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.561 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.561 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.562 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.562 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.562 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.562 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.562 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.562 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.562 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.563 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.563 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.563 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.563 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.563 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.563 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.564 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.564 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.564 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.564 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.564 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.564 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.565 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.565 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.565 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.565 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.565 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.565 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.565 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.566 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.566 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.566 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.566 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.566 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.566 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.567 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.567 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.567 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.567 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.567 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.567 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.567 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.568 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.568 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.568 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.568 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.568 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.568 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.569 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.569 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.569 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.569 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.569 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.569 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.569 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.570 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.570 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.570 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.570 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.570 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.570 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.571 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.571 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.571 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.571 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.571 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.571 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.572 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.572 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.572 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.572 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.572 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.572 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.572 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.572 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.573 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.573 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.573 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.573 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.573 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.573 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.574 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.574 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.574 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.574 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.574 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.574 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.574 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.575 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.575 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.575 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.575 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.575 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.575 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.576 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.576 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.576 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.576 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.576 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.576 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.576 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.577 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.577 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.577 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.577 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.577 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.577 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.577 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.578 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.578 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.578 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.578 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.578 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.578 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.579 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.579 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.579 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.579 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.579 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.579 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.579 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.580 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.580 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.580 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.580 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.580 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.580 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.580 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.581 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.581 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.581 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.581 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.581 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.581 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.582 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.582 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.582 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.582 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.582 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.582 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.582 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.582 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.583 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.583 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.583 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.583 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.583 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.583 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.584 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.584 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.584 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.584 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.584 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.584 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.584 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.585 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.585 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.585 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.585 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.585 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.585 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.585 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.586 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.586 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.586 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.586 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.586 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.586 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.587 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.587 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.587 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.587 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.587 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.587 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.588 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.588 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.588 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.588 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.588 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.588 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.588 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.589 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.589 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.589 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.589 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.589 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.589 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.589 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.590 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.590 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.590 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.590 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.590 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.590 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.591 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.591 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.591 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.591 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.591 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.591 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.592 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.592 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.592 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.592 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.592 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.592 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.592 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.593 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.593 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.593 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.593 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.593 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.593 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.593 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.594 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.594 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.594 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.594 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.594 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.595 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.595 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.595 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.595 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.595 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.596 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.596 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.596 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.596 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.596 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.596 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.596 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.597 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.597 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.597 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.597 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.597 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.597 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.597 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.598 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.598 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.598 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.598 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.598 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.598 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.598 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.599 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.599 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.599 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.599 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.599 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.599 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.599 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.600 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.600 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.600 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.600 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.600 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.600 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.601 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.601 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.601 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.601 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.601 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.601 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.601 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.601 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.602 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.602 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.602 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.602 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.602 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.602 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.602 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.603 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.603 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.603 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.603 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.603 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.603 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.603 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.604 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.604 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.604 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.604 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.604 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.604 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.604 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.605 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.605 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.605 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.605 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.605 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.605 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.605 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.605 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.606 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.606 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.606 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.606 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.606 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.606 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.607 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.607 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.607 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.607 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.607 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.607 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.607 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.608 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.608 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.608 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.608 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.608 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.609 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.609 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.609 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.609 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.609 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.610 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.610 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.610 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.610 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.610 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.610 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.611 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.611 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.611 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.611 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.611 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.611 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.611 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.612 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.612 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.612 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.612 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.612 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.612 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.612 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.613 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.613 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.613 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.613 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.613 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.613 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.614 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.614 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.614 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.614 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.614 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.615 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.615 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.615 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.615 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.615 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.615 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.615 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.616 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.616 229088 WARNING oslo_config.cfg [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 23 21:01:49 compute-1 nova_compute[229084]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 23 21:01:49 compute-1 nova_compute[229084]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 23 21:01:49 compute-1 nova_compute[229084]: and ``live_migration_inbound_addr`` respectively.
Nov 23 21:01:49 compute-1 nova_compute[229084]: ).  Its value may be silently ignored in the future.
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.616 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.616 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.617 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.617 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.617 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.617 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.617 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.618 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.618 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.618 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.618 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.618 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.619 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.619 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.619 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.619 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.619 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.620 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.620 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.rbd_secret_uuid        = 03808be8-ae4a-5548-82e6-4a294f1bc627 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.620 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.620 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.620 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.620 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.620 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.621 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.621 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.621 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.621 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.621 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.621 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.622 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.622 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.622 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.622 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.622 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.622 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.623 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.623 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.623 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.623 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.623 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.623 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.624 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.624 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.624 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.624 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.624 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.624 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.624 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.625 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.625 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.625 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.625 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.625 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.625 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.625 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.626 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.626 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.626 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.626 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.626 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.626 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.626 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.627 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.627 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.627 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.627 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.627 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.627 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.627 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.628 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.628 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.628 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.628 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.628 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.628 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.628 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.629 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.629 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.629 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.629 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.629 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.629 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.629 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.630 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.630 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.630 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.630 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.630 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.630 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.631 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.631 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.631 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.631 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.631 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.631 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.631 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.632 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.632 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.632 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.632 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.632 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.632 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.632 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.632 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.633 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.633 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.633 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.633 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.633 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.633 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.634 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.634 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.634 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.634 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.634 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.635 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.635 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.635 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.635 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.635 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.635 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.636 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.636 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.636 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.636 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.636 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.636 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.636 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.637 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.637 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.637 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.637 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.637 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.637 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.637 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.638 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.638 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.638 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.638 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.638 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.638 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.639 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.639 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.639 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.639 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.639 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.639 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.640 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.640 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.640 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.640 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.640 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.641 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.641 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.641 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.641 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.641 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.642 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.642 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.642 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.642 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.642 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.643 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.643 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.643 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.643 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.643 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.643 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.643 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.644 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.644 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.644 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.644 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.644 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.644 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.645 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.645 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.645 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.645 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.645 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.645 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.646 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.646 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.646 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.646 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.646 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.646 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.647 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.647 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.647 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.647 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.647 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.647 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.647 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.648 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.648 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.648 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.648 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.648 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.648 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.648 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.649 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.649 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.649 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.649 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.649 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.649 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.649 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.650 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.650 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.650 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.650 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.650 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.650 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.650 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.651 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.651 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.651 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.651 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.651 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.651 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.651 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.652 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.652 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.652 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.652 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.652 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.652 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.653 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.653 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.653 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.653 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.653 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.653 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.653 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.654 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.654 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.654 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.654 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.654 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.654 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.654 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.655 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.655 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.655 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.655 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.655 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.655 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.656 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.656 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.656 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.656 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.656 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.656 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.657 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.657 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.657 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.657 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.657 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.657 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.657 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.658 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.658 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.658 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.658 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.658 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.658 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.659 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.659 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.659 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.659 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.659 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.659 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.660 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.660 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.660 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.660 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.660 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.660 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.660 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.661 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.661 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.661 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.661 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.661 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.661 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.661 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.662 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.662 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.662 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.662 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.662 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.662 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.662 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.663 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.663 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.663 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.663 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.663 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.663 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.663 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.664 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.664 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.664 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.664 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.664 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.664 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.665 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.665 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.665 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.665 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.665 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.665 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.665 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.665 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.666 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.666 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.666 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.666 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.666 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.666 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.666 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.667 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.667 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.667 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.667 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.667 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.667 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.667 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.668 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.668 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.668 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.668 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.668 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.668 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.668 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.669 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.669 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.669 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.669 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.669 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.669 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.670 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.670 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.670 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.670 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.670 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.670 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.670 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.671 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.671 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.671 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.671 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.671 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.671 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.671 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.672 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.672 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.672 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.672 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.672 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.672 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.672 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.673 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.673 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.673 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.673 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.673 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.673 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.673 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.674 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.674 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.674 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.674 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.674 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.674 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.674 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.675 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.675 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.675 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.675 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.675 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.675 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.675 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.676 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.676 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.676 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.676 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.676 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.676 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.676 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.677 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.677 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.677 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.677 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.677 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.677 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.678 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.678 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.678 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.678 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.678 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.678 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.678 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.679 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.679 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.679 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.679 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.679 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.679 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.679 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.680 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.680 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.680 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.680 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.680 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.680 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.680 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.681 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.681 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.681 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.681 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.681 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.682 229088 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 23 21:01:49 compute-1 python3.9[229682]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.696 229088 DEBUG nova.virt.libvirt.host [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.697 229088 DEBUG nova.virt.libvirt.host [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.697 229088 DEBUG nova.virt.libvirt.host [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.698 229088 DEBUG nova.virt.libvirt.host [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 23 21:01:49 compute-1 systemd[1]: Starting libvirt QEMU daemon...
Nov 23 21:01:49 compute-1 systemd[1]: Started libvirt QEMU daemon.
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.775 229088 DEBUG nova.virt.libvirt.host [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f4c6b5afcd0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.777 229088 DEBUG nova.virt.libvirt.host [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f4c6b5afcd0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.777 229088 INFO nova.virt.libvirt.driver [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Connection event '1' reason 'None'
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.836 229088 WARNING nova.virt.libvirt.driver [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Cannot update service status on host "compute-1.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.
Nov 23 21:01:49 compute-1 nova_compute[229084]: 2025-11-23 21:01:49.836 229088 DEBUG nova.virt.libvirt.volume.mount [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 23 21:01:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:50.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:01:50 compute-1 sudo[229893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vytrbnzfnwrldokpxxjdimzmdmzmkyzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931710.15052-4335-106341601056691/AnsiballZ_podman_container.py'
Nov 23 21:01:50 compute-1 sudo[229893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:01:50 compute-1 nova_compute[229084]: 2025-11-23 21:01:50.591 229088 INFO nova.virt.libvirt.host [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Libvirt host capabilities <capabilities>
Nov 23 21:01:50 compute-1 nova_compute[229084]: 
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <host>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <uuid>dffd854b-01ce-4a28-b7a6-32174dbe320c</uuid>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <cpu>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <arch>x86_64</arch>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model>EPYC-Rome-v4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <vendor>AMD</vendor>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <microcode version='16777317'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <signature family='23' model='49' stepping='0'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <maxphysaddr mode='emulate' bits='40'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature name='x2apic'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature name='tsc-deadline'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature name='osxsave'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature name='hypervisor'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature name='tsc_adjust'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature name='spec-ctrl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature name='stibp'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature name='arch-capabilities'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature name='ssbd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature name='cmp_legacy'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature name='topoext'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature name='virt-ssbd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature name='lbrv'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature name='tsc-scale'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature name='vmcb-clean'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature name='pause-filter'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature name='pfthreshold'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature name='svme-addr-chk'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature name='rdctl-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature name='skip-l1dfl-vmentry'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature name='mds-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature name='pschange-mc-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <pages unit='KiB' size='4'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <pages unit='KiB' size='2048'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <pages unit='KiB' size='1048576'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </cpu>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <power_management>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <suspend_mem/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </power_management>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <iommu support='no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <migration_features>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <live/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <uri_transports>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <uri_transport>tcp</uri_transport>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <uri_transport>rdma</uri_transport>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </uri_transports>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </migration_features>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <topology>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <cells num='1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <cell id='0'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:           <memory unit='KiB'>7864320</memory>
Nov 23 21:01:50 compute-1 nova_compute[229084]:           <pages unit='KiB' size='4'>1966080</pages>
Nov 23 21:01:50 compute-1 nova_compute[229084]:           <pages unit='KiB' size='2048'>0</pages>
Nov 23 21:01:50 compute-1 nova_compute[229084]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 23 21:01:50 compute-1 nova_compute[229084]:           <distances>
Nov 23 21:01:50 compute-1 nova_compute[229084]:             <sibling id='0' value='10'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:           </distances>
Nov 23 21:01:50 compute-1 nova_compute[229084]:           <cpus num='8'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:           </cpus>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         </cell>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </cells>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </topology>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <cache>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </cache>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <secmodel>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model>selinux</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <doi>0</doi>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </secmodel>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <secmodel>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model>dac</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <doi>0</doi>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </secmodel>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   </host>
Nov 23 21:01:50 compute-1 nova_compute[229084]: 
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <guest>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <os_type>hvm</os_type>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <arch name='i686'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <wordsize>32</wordsize>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <domain type='qemu'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <domain type='kvm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </arch>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <features>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <pae/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <nonpae/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <acpi default='on' toggle='yes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <apic default='on' toggle='no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <cpuselection/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <deviceboot/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <disksnapshot default='on' toggle='no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <externalSnapshot/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </features>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   </guest>
Nov 23 21:01:50 compute-1 nova_compute[229084]: 
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <guest>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <os_type>hvm</os_type>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <arch name='x86_64'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <wordsize>64</wordsize>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <domain type='qemu'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <domain type='kvm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </arch>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <features>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <acpi default='on' toggle='yes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <apic default='on' toggle='no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <cpuselection/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <deviceboot/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <disksnapshot default='on' toggle='no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <externalSnapshot/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </features>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   </guest>
Nov 23 21:01:50 compute-1 nova_compute[229084]: 
Nov 23 21:01:50 compute-1 nova_compute[229084]: </capabilities>
Nov 23 21:01:50 compute-1 nova_compute[229084]: 
Nov 23 21:01:50 compute-1 nova_compute[229084]: 2025-11-23 21:01:50.597 229088 DEBUG nova.virt.libvirt.host [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 23 21:01:50 compute-1 nova_compute[229084]: 2025-11-23 21:01:50.615 229088 DEBUG nova.virt.libvirt.host [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 23 21:01:50 compute-1 nova_compute[229084]: <domainCapabilities>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <path>/usr/libexec/qemu-kvm</path>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <domain>kvm</domain>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <arch>i686</arch>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <vcpu max='4096'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <iothreads supported='yes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <os supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <enum name='firmware'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <loader supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='type'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>rom</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>pflash</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='readonly'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>yes</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>no</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='secure'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>no</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </loader>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   </os>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <cpu>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <mode name='host-passthrough' supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='hostPassthroughMigratable'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>on</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>off</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </mode>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <mode name='maximum' supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='maximumMigratable'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>on</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>off</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </mode>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <mode name='host-model' supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <vendor>AMD</vendor>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='x2apic'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='tsc-deadline'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='hypervisor'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='tsc_adjust'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='spec-ctrl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='stibp'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='ssbd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='cmp_legacy'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='overflow-recov'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='succor'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='ibrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='amd-ssbd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='virt-ssbd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='lbrv'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='tsc-scale'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='vmcb-clean'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='flushbyasid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='pause-filter'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='pfthreshold'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='svme-addr-chk'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='disable' name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </mode>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <mode name='custom' supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Broadwell'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Broadwell-IBRS'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Broadwell-noTSX'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Broadwell-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Broadwell-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Broadwell-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Broadwell-v4'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cascadelake-Server'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cascadelake-Server-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cascadelake-Server-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cascadelake-Server-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cascadelake-Server-v4'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cascadelake-Server-v5'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cooperlake'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cooperlake-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cooperlake-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Denverton'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='mpx'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Denverton-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='mpx'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Denverton-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Denverton-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Dhyana-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-Genoa'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amd-psfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='auto-ibrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='no-nested-data-bp'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='null-sel-clr-base'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='stibp-always-on'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-Genoa-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amd-psfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='auto-ibrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='no-nested-data-bp'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='null-sel-clr-base'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='stibp-always-on'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-Milan'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-Milan-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-Milan-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amd-psfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='no-nested-data-bp'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='null-sel-clr-base'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='stibp-always-on'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-Rome'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-Rome-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-Rome-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-Rome-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-v4'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='GraniteRapids'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-int8'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-tile'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fbsdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrc'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fzrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='mcdt-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pbrsb-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='prefetchiti'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='psdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='serialize'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='GraniteRapids-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-int8'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-tile'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fbsdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrc'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fzrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='mcdt-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pbrsb-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='prefetchiti'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='psdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='serialize'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='GraniteRapids-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-int8'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-tile'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx10'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx10-128'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx10-256'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx10-512'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='cldemote'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fbsdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrc'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fzrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='mcdt-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdir64b'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdiri'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pbrsb-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='prefetchiti'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='psdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='serialize'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Haswell'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Haswell-IBRS'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Haswell-noTSX'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Haswell-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Haswell-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Haswell-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Haswell-v4'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Icelake-Server'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Icelake-Server-noTSX'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Icelake-Server-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Icelake-Server-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Icelake-Server-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Icelake-Server-v4'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Icelake-Server-v5'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Icelake-Server-v6'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Icelake-Server-v7'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='IvyBridge'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='IvyBridge-IBRS'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='IvyBridge-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='IvyBridge-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='KnightsMill'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-4fmaps'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-4vnniw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512er'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512pf'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='KnightsMill-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-4fmaps'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-4vnniw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512er'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512pf'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Opteron_G4'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fma4'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xop'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Opteron_G4-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fma4'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xop'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Opteron_G5'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fma4'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='tbm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xop'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Opteron_G5-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fma4'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='tbm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xop'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='SapphireRapids'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-int8'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-tile'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrc'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fzrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='serialize'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='SapphireRapids-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-int8'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-tile'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrc'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fzrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='serialize'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='SapphireRapids-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-int8'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-tile'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fbsdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrc'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fzrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='psdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='serialize'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='SapphireRapids-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-int8'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-tile'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='cldemote'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fbsdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrc'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fzrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdir64b'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdiri'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='psdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='serialize'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='SierraForest'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-ne-convert'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni-int8'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='cmpccxadd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fbsdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='mcdt-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pbrsb-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='psdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='serialize'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='SierraForest-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-ne-convert'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni-int8'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='cmpccxadd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fbsdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='mcdt-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pbrsb-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='psdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='serialize'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Client'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Client-IBRS'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Client-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Client-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Client-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Client-v4'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Server'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Server-IBRS'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Server-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Server-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Server-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Server-v4'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Server-v5'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Snowridge'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='cldemote'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='core-capability'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdir64b'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdiri'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='mpx'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='split-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Snowridge-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='cldemote'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='core-capability'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdir64b'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdiri'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='mpx'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='split-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Snowridge-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='cldemote'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='core-capability'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdir64b'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdiri'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='split-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Snowridge-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='cldemote'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='core-capability'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdir64b'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdiri'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='split-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Snowridge-v4'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='cldemote'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdir64b'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdiri'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='athlon'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='3dnow'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='3dnowext'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='athlon-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='3dnow'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='3dnowext'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='core2duo'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='core2duo-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='coreduo'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='coreduo-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='n270'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='n270-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='phenom'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='3dnow'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='3dnowext'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='phenom-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='3dnow'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='3dnowext'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </mode>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   </cpu>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <memoryBacking supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <enum name='sourceType'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <value>file</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <value>anonymous</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <value>memfd</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   </memoryBacking>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <devices>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <disk supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='diskDevice'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>disk</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>cdrom</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>floppy</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>lun</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='bus'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>fdc</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>scsi</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>virtio</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>usb</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>sata</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='model'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>virtio</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>virtio-transitional</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>virtio-non-transitional</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </disk>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <graphics supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='type'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>vnc</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>egl-headless</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>dbus</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </graphics>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <video supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='modelType'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>vga</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>cirrus</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>virtio</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>none</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>bochs</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>ramfb</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </video>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <hostdev supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='mode'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>subsystem</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='startupPolicy'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>default</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>mandatory</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>requisite</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>optional</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='subsysType'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>usb</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>pci</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>scsi</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='capsType'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='pciBackend'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </hostdev>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <rng supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='model'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>virtio</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>virtio-transitional</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>virtio-non-transitional</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='backendModel'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>random</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>egd</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>builtin</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </rng>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <filesystem supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='driverType'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>path</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>handle</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>virtiofs</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </filesystem>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <tpm supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='model'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>tpm-tis</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>tpm-crb</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='backendModel'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>emulator</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>external</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='backendVersion'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>2.0</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </tpm>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <redirdev supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='bus'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>usb</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </redirdev>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <channel supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='type'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>pty</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>unix</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </channel>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <crypto supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='model'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='type'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>qemu</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='backendModel'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>builtin</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </crypto>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <interface supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='backendType'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>default</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>passt</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </interface>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <panic supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='model'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>isa</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>hyperv</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </panic>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <console supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='type'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>null</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>vc</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>pty</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>dev</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>file</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>pipe</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>stdio</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>udp</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>tcp</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>unix</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>qemu-vdagent</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>dbus</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </console>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   </devices>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <features>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <gic supported='no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <vmcoreinfo supported='yes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <genid supported='yes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <backingStoreInput supported='yes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <backup supported='yes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <async-teardown supported='yes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <ps2 supported='yes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <sev supported='no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <sgx supported='no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <hyperv supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='features'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>relaxed</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>vapic</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>spinlocks</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>vpindex</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>runtime</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>synic</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>stimer</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>reset</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>vendor_id</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>frequencies</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>reenlightenment</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>tlbflush</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>ipi</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>avic</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>emsr_bitmap</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>xmm_input</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <defaults>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <spinlocks>4095</spinlocks>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <stimer_direct>on</stimer_direct>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <tlbflush_direct>on</tlbflush_direct>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <tlbflush_extended>on</tlbflush_extended>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </defaults>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </hyperv>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <launchSecurity supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='sectype'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>tdx</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </launchSecurity>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   </features>
Nov 23 21:01:50 compute-1 nova_compute[229084]: </domainCapabilities>
Nov 23 21:01:50 compute-1 nova_compute[229084]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 23 21:01:50 compute-1 nova_compute[229084]: 2025-11-23 21:01:50.620 229088 DEBUG nova.virt.libvirt.host [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 23 21:01:50 compute-1 nova_compute[229084]: <domainCapabilities>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <path>/usr/libexec/qemu-kvm</path>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <domain>kvm</domain>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <arch>i686</arch>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <vcpu max='240'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <iothreads supported='yes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <os supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <enum name='firmware'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <loader supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='type'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>rom</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>pflash</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='readonly'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>yes</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>no</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='secure'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>no</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </loader>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   </os>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <cpu>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <mode name='host-passthrough' supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='hostPassthroughMigratable'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>on</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>off</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </mode>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <mode name='maximum' supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='maximumMigratable'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>on</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>off</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </mode>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <mode name='host-model' supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <vendor>AMD</vendor>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='x2apic'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='tsc-deadline'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='hypervisor'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='tsc_adjust'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='spec-ctrl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='stibp'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='ssbd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='cmp_legacy'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='overflow-recov'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='succor'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='ibrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='amd-ssbd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='virt-ssbd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='lbrv'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='tsc-scale'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='vmcb-clean'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='flushbyasid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='pause-filter'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='pfthreshold'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='svme-addr-chk'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='disable' name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </mode>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <mode name='custom' supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Broadwell'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Broadwell-IBRS'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Broadwell-noTSX'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Broadwell-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Broadwell-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Broadwell-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Broadwell-v4'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cascadelake-Server'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cascadelake-Server-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cascadelake-Server-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cascadelake-Server-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cascadelake-Server-v4'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cascadelake-Server-v5'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cooperlake'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cooperlake-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cooperlake-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Denverton'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='mpx'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Denverton-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='mpx'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Denverton-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Denverton-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Dhyana-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-Genoa'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amd-psfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='auto-ibrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='no-nested-data-bp'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='null-sel-clr-base'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='stibp-always-on'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-Genoa-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amd-psfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='auto-ibrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='no-nested-data-bp'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='null-sel-clr-base'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='stibp-always-on'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-Milan'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-Milan-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-Milan-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amd-psfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='no-nested-data-bp'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='null-sel-clr-base'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='stibp-always-on'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-Rome'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-Rome-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-Rome-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-Rome-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-v4'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='GraniteRapids'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-int8'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-tile'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fbsdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrc'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fzrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='mcdt-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pbrsb-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='prefetchiti'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='psdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='serialize'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='GraniteRapids-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-int8'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-tile'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fbsdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrc'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fzrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='mcdt-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pbrsb-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='prefetchiti'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='psdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='serialize'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='GraniteRapids-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-int8'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-tile'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx10'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx10-128'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx10-256'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx10-512'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='cldemote'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fbsdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrc'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fzrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='mcdt-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdir64b'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdiri'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pbrsb-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='prefetchiti'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='psdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='serialize'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Haswell'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Haswell-IBRS'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Haswell-noTSX'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Haswell-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Haswell-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Haswell-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Haswell-v4'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Icelake-Server'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Icelake-Server-noTSX'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Icelake-Server-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Icelake-Server-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Icelake-Server-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Icelake-Server-v4'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Icelake-Server-v5'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Icelake-Server-v6'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Icelake-Server-v7'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='IvyBridge'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='IvyBridge-IBRS'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='IvyBridge-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='IvyBridge-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='KnightsMill'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-4fmaps'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-4vnniw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512er'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512pf'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='KnightsMill-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-4fmaps'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-4vnniw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512er'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512pf'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Opteron_G4'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fma4'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xop'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Opteron_G4-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fma4'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xop'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Opteron_G5'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fma4'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='tbm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xop'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Opteron_G5-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fma4'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='tbm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xop'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='SapphireRapids'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-int8'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-tile'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrc'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fzrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='serialize'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='SapphireRapids-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-int8'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-tile'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrc'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fzrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='serialize'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='SapphireRapids-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-int8'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-tile'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fbsdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrc'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fzrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='psdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='serialize'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='SapphireRapids-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-int8'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-tile'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='cldemote'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fbsdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrc'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fzrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdir64b'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdiri'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='psdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='serialize'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='SierraForest'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-ne-convert'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni-int8'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='cmpccxadd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fbsdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='mcdt-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pbrsb-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='psdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='serialize'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='SierraForest-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-ne-convert'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni-int8'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='cmpccxadd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fbsdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='mcdt-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pbrsb-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='psdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='serialize'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Client'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Client-IBRS'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Client-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Client-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Client-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Client-v4'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Server'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Server-IBRS'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Server-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Server-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Server-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Server-v4'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Server-v5'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Snowridge'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='cldemote'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='core-capability'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdir64b'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdiri'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='mpx'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='split-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Snowridge-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='cldemote'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='core-capability'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdir64b'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdiri'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='mpx'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='split-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Snowridge-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='cldemote'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='core-capability'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdir64b'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdiri'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='split-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Snowridge-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='cldemote'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='core-capability'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdir64b'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdiri'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='split-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Snowridge-v4'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='cldemote'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdir64b'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdiri'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='athlon'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='3dnow'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='3dnowext'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='athlon-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='3dnow'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='3dnowext'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='core2duo'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='core2duo-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='coreduo'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='coreduo-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='n270'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='n270-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='phenom'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='3dnow'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='3dnowext'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='phenom-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='3dnow'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='3dnowext'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </mode>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   </cpu>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <memoryBacking supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <enum name='sourceType'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <value>file</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <value>anonymous</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <value>memfd</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   </memoryBacking>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <devices>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <disk supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='diskDevice'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>disk</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>cdrom</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>floppy</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>lun</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='bus'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>ide</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>fdc</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>scsi</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>virtio</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>usb</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>sata</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='model'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>virtio</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>virtio-transitional</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>virtio-non-transitional</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </disk>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <graphics supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='type'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>vnc</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>egl-headless</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>dbus</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </graphics>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <video supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='modelType'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>vga</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>cirrus</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>virtio</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>none</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>bochs</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>ramfb</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </video>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <hostdev supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='mode'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>subsystem</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='startupPolicy'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>default</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>mandatory</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>requisite</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>optional</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='subsysType'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>usb</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>pci</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>scsi</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='capsType'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='pciBackend'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </hostdev>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <rng supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='model'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>virtio</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>virtio-transitional</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>virtio-non-transitional</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='backendModel'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>random</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>egd</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>builtin</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </rng>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <filesystem supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='driverType'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>path</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>handle</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>virtiofs</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </filesystem>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <tpm supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='model'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>tpm-tis</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>tpm-crb</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='backendModel'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>emulator</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>external</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='backendVersion'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>2.0</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </tpm>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <redirdev supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='bus'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>usb</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </redirdev>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <channel supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='type'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>pty</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>unix</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </channel>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <crypto supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='model'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='type'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>qemu</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='backendModel'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>builtin</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </crypto>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <interface supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='backendType'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>default</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>passt</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </interface>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <panic supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='model'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>isa</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>hyperv</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </panic>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <console supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='type'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>null</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>vc</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>pty</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>dev</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>file</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>pipe</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>stdio</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>udp</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>tcp</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>unix</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>qemu-vdagent</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>dbus</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </console>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   </devices>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <features>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <gic supported='no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <vmcoreinfo supported='yes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <genid supported='yes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <backingStoreInput supported='yes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <backup supported='yes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <async-teardown supported='yes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <ps2 supported='yes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <sev supported='no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <sgx supported='no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <hyperv supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='features'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>relaxed</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>vapic</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>spinlocks</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>vpindex</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>runtime</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>synic</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>stimer</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>reset</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>vendor_id</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>frequencies</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>reenlightenment</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>tlbflush</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>ipi</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>avic</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>emsr_bitmap</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>xmm_input</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <defaults>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <spinlocks>4095</spinlocks>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <stimer_direct>on</stimer_direct>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <tlbflush_direct>on</tlbflush_direct>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <tlbflush_extended>on</tlbflush_extended>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </defaults>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </hyperv>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <launchSecurity supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='sectype'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>tdx</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </launchSecurity>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   </features>
Nov 23 21:01:50 compute-1 nova_compute[229084]: </domainCapabilities>
Nov 23 21:01:50 compute-1 nova_compute[229084]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 23 21:01:50 compute-1 nova_compute[229084]: 2025-11-23 21:01:50.650 229088 DEBUG nova.virt.libvirt.host [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 23 21:01:50 compute-1 nova_compute[229084]: 2025-11-23 21:01:50.653 229088 DEBUG nova.virt.libvirt.host [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 23 21:01:50 compute-1 nova_compute[229084]: <domainCapabilities>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <path>/usr/libexec/qemu-kvm</path>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <domain>kvm</domain>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <arch>x86_64</arch>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <vcpu max='4096'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <iothreads supported='yes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <os supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <enum name='firmware'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <value>efi</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <loader supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='type'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>rom</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>pflash</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='readonly'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>yes</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>no</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='secure'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>yes</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>no</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </loader>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   </os>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <cpu>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <mode name='host-passthrough' supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='hostPassthroughMigratable'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>on</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>off</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </mode>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <mode name='maximum' supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='maximumMigratable'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>on</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>off</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </mode>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <mode name='host-model' supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <vendor>AMD</vendor>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='x2apic'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='tsc-deadline'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='hypervisor'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='tsc_adjust'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='spec-ctrl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='stibp'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='ssbd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='cmp_legacy'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='overflow-recov'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='succor'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='ibrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='amd-ssbd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='virt-ssbd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='lbrv'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='tsc-scale'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='vmcb-clean'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='flushbyasid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='pause-filter'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='pfthreshold'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='svme-addr-chk'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='disable' name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </mode>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <mode name='custom' supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Broadwell'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Broadwell-IBRS'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Broadwell-noTSX'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Broadwell-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Broadwell-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Broadwell-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Broadwell-v4'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cascadelake-Server'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cascadelake-Server-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cascadelake-Server-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cascadelake-Server-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cascadelake-Server-v4'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cascadelake-Server-v5'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cooperlake'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cooperlake-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cooperlake-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Denverton'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='mpx'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Denverton-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='mpx'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Denverton-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Denverton-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Dhyana-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-Genoa'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amd-psfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='auto-ibrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='no-nested-data-bp'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='null-sel-clr-base'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='stibp-always-on'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-Genoa-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amd-psfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='auto-ibrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='no-nested-data-bp'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='null-sel-clr-base'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='stibp-always-on'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-Milan'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-Milan-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-Milan-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amd-psfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='no-nested-data-bp'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='null-sel-clr-base'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='stibp-always-on'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-Rome'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-Rome-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-Rome-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-Rome-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-v4'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='GraniteRapids'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-int8'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-tile'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fbsdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrc'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fzrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='mcdt-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pbrsb-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='prefetchiti'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='psdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='serialize'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='GraniteRapids-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-int8'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-tile'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fbsdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrc'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fzrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='mcdt-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pbrsb-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='prefetchiti'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='psdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='serialize'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='GraniteRapids-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-int8'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-tile'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx10'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx10-128'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx10-256'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx10-512'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='cldemote'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fbsdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrc'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fzrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='mcdt-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdir64b'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdiri'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pbrsb-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='prefetchiti'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='psdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='serialize'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Haswell'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Haswell-IBRS'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Haswell-noTSX'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Haswell-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Haswell-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Haswell-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Haswell-v4'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Icelake-Server'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Icelake-Server-noTSX'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Icelake-Server-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Icelake-Server-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Icelake-Server-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Icelake-Server-v4'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Icelake-Server-v5'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Icelake-Server-v6'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Icelake-Server-v7'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='IvyBridge'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='IvyBridge-IBRS'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='IvyBridge-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='IvyBridge-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='KnightsMill'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-4fmaps'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-4vnniw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512er'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512pf'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='KnightsMill-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-4fmaps'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-4vnniw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512er'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512pf'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Opteron_G4'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fma4'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xop'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Opteron_G4-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fma4'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xop'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Opteron_G5'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fma4'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='tbm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xop'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Opteron_G5-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fma4'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='tbm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xop'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='SapphireRapids'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-int8'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-tile'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrc'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fzrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='serialize'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='SapphireRapids-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-int8'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-tile'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrc'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fzrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='serialize'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='SapphireRapids-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-int8'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-tile'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fbsdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrc'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fzrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='psdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='serialize'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='SapphireRapids-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-int8'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-tile'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='cldemote'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fbsdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrc'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fzrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdir64b'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdiri'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='psdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='serialize'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='SierraForest'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-ne-convert'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni-int8'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='cmpccxadd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fbsdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='mcdt-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pbrsb-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='psdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='serialize'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='SierraForest-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-ne-convert'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni-int8'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='cmpccxadd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fbsdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='mcdt-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pbrsb-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='psdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='serialize'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Client'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Client-IBRS'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Client-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Client-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Client-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Client-v4'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Server'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Server-IBRS'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Server-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Server-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Server-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Server-v4'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Server-v5'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Snowridge'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='cldemote'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='core-capability'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdir64b'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdiri'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='mpx'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='split-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Snowridge-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='cldemote'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='core-capability'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdir64b'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdiri'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='mpx'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='split-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Snowridge-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='cldemote'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='core-capability'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdir64b'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdiri'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='split-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Snowridge-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='cldemote'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='core-capability'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdir64b'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdiri'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='split-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Snowridge-v4'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='cldemote'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdir64b'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdiri'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='athlon'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='3dnow'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='3dnowext'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='athlon-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='3dnow'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='3dnowext'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='core2duo'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='core2duo-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='coreduo'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='coreduo-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='n270'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='n270-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='phenom'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='3dnow'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='3dnowext'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='phenom-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='3dnow'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='3dnowext'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </mode>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   </cpu>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <memoryBacking supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <enum name='sourceType'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <value>file</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <value>anonymous</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <value>memfd</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   </memoryBacking>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <devices>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <disk supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='diskDevice'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>disk</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>cdrom</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>floppy</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>lun</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='bus'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>fdc</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>scsi</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>virtio</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>usb</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>sata</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='model'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>virtio</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>virtio-transitional</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>virtio-non-transitional</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </disk>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <graphics supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='type'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>vnc</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>egl-headless</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>dbus</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </graphics>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <video supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='modelType'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>vga</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>cirrus</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>virtio</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>none</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>bochs</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>ramfb</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </video>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <hostdev supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='mode'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>subsystem</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='startupPolicy'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>default</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>mandatory</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>requisite</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>optional</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='subsysType'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>usb</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>pci</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>scsi</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='capsType'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='pciBackend'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </hostdev>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <rng supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='model'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>virtio</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>virtio-transitional</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>virtio-non-transitional</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='backendModel'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>random</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>egd</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>builtin</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </rng>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <filesystem supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='driverType'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>path</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>handle</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>virtiofs</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </filesystem>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <tpm supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='model'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>tpm-tis</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>tpm-crb</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='backendModel'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>emulator</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>external</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='backendVersion'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>2.0</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </tpm>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <redirdev supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='bus'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>usb</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </redirdev>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <channel supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='type'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>pty</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>unix</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </channel>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <crypto supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='model'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='type'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>qemu</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='backendModel'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>builtin</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </crypto>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <interface supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='backendType'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>default</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>passt</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </interface>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <panic supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='model'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>isa</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>hyperv</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </panic>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <console supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='type'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>null</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>vc</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>pty</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>dev</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>file</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>pipe</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>stdio</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>udp</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>tcp</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>unix</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>qemu-vdagent</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>dbus</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </console>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   </devices>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <features>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <gic supported='no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <vmcoreinfo supported='yes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <genid supported='yes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <backingStoreInput supported='yes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <backup supported='yes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <async-teardown supported='yes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <ps2 supported='yes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <sev supported='no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <sgx supported='no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <hyperv supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='features'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>relaxed</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>vapic</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>spinlocks</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>vpindex</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>runtime</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>synic</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>stimer</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>reset</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>vendor_id</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>frequencies</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>reenlightenment</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>tlbflush</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>ipi</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>avic</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>emsr_bitmap</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>xmm_input</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <defaults>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <spinlocks>4095</spinlocks>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <stimer_direct>on</stimer_direct>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <tlbflush_direct>on</tlbflush_direct>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <tlbflush_extended>on</tlbflush_extended>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </defaults>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </hyperv>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <launchSecurity supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='sectype'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>tdx</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </launchSecurity>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   </features>
Nov 23 21:01:50 compute-1 nova_compute[229084]: </domainCapabilities>
Nov 23 21:01:50 compute-1 nova_compute[229084]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 23 21:01:50 compute-1 nova_compute[229084]: 2025-11-23 21:01:50.719 229088 DEBUG nova.virt.libvirt.host [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 23 21:01:50 compute-1 nova_compute[229084]: <domainCapabilities>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <path>/usr/libexec/qemu-kvm</path>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <domain>kvm</domain>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <arch>x86_64</arch>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <vcpu max='240'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <iothreads supported='yes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <os supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <enum name='firmware'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <loader supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='type'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>rom</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>pflash</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='readonly'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>yes</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>no</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='secure'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>no</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </loader>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   </os>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <cpu>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <mode name='host-passthrough' supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='hostPassthroughMigratable'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>on</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>off</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </mode>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <mode name='maximum' supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='maximumMigratable'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>on</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>off</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </mode>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <mode name='host-model' supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <vendor>AMD</vendor>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='x2apic'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='tsc-deadline'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='hypervisor'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='tsc_adjust'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='spec-ctrl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='stibp'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='ssbd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='cmp_legacy'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='overflow-recov'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='succor'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='ibrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='amd-ssbd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='virt-ssbd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='lbrv'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='tsc-scale'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='vmcb-clean'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='flushbyasid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='pause-filter'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='pfthreshold'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='svme-addr-chk'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <feature policy='disable' name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </mode>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <mode name='custom' supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Broadwell'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Broadwell-IBRS'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Broadwell-noTSX'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Broadwell-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Broadwell-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Broadwell-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Broadwell-v4'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cascadelake-Server'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cascadelake-Server-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cascadelake-Server-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cascadelake-Server-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cascadelake-Server-v4'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cascadelake-Server-v5'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cooperlake'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cooperlake-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Cooperlake-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Denverton'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='mpx'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Denverton-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='mpx'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Denverton-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Denverton-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Dhyana-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-Genoa'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amd-psfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='auto-ibrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='no-nested-data-bp'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='null-sel-clr-base'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='stibp-always-on'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-Genoa-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amd-psfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='auto-ibrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='no-nested-data-bp'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='null-sel-clr-base'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='stibp-always-on'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-Milan'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-Milan-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-Milan-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amd-psfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='no-nested-data-bp'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='null-sel-clr-base'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='stibp-always-on'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-Rome'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-Rome-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-Rome-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-Rome-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='EPYC-v4'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='GraniteRapids'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-int8'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-tile'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fbsdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrc'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fzrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='mcdt-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pbrsb-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='prefetchiti'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='psdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='serialize'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='GraniteRapids-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-int8'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-tile'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fbsdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrc'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fzrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='mcdt-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pbrsb-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='prefetchiti'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='psdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='serialize'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='GraniteRapids-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-int8'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-tile'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx10'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx10-128'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx10-256'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx10-512'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='cldemote'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fbsdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrc'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fzrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='mcdt-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdir64b'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdiri'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pbrsb-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='prefetchiti'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='psdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='serialize'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Haswell'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Haswell-IBRS'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Haswell-noTSX'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Haswell-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Haswell-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Haswell-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Haswell-v4'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Icelake-Server'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Icelake-Server-noTSX'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Icelake-Server-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Icelake-Server-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Icelake-Server-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Icelake-Server-v4'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Icelake-Server-v5'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Icelake-Server-v6'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Icelake-Server-v7'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='IvyBridge'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='IvyBridge-IBRS'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='IvyBridge-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='IvyBridge-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='KnightsMill'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-4fmaps'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-4vnniw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512er'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512pf'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='KnightsMill-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-4fmaps'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-4vnniw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512er'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512pf'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Opteron_G4'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fma4'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xop'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Opteron_G4-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fma4'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xop'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Opteron_G5'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fma4'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='tbm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xop'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Opteron_G5-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fma4'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='tbm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xop'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='SapphireRapids'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-int8'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-tile'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrc'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fzrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='serialize'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='SapphireRapids-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-int8'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-tile'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrc'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fzrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='serialize'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='SapphireRapids-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-int8'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-tile'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fbsdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrc'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fzrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='psdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='serialize'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='SapphireRapids-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-int8'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='amx-tile'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-bf16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-fp16'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bitalg'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='cldemote'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fbsdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrc'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fzrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='la57'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdir64b'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdiri'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='psdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='serialize'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='taa-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xfd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='SierraForest'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-ne-convert'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni-int8'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='cmpccxadd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fbsdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='mcdt-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pbrsb-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='psdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='serialize'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='SierraForest-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-ifma'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-ne-convert'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx-vnni-int8'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='cmpccxadd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fbsdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='fsrs'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ibrs-all'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='mcdt-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pbrsb-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='psdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='serialize'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vaes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Client'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Client-IBRS'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Client-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Client-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Client-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Client-v4'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Server'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Server-IBRS'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Server-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Server-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='hle'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='rtm'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Server-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Server-v4'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Skylake-Server-v5'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512bw'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512cd'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512dq'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512f'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='avx512vl'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='invpcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pcid'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='pku'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Snowridge'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='cldemote'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='core-capability'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdir64b'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdiri'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='mpx'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='split-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Snowridge-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='cldemote'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='core-capability'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdir64b'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdiri'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='mpx'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='split-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Snowridge-v2'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='cldemote'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='core-capability'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdir64b'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdiri'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='split-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Snowridge-v3'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='cldemote'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='core-capability'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdir64b'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdiri'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='split-lock-detect'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='Snowridge-v4'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='cldemote'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='erms'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='gfni'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdir64b'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='movdiri'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='xsaves'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='athlon'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='3dnow'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='3dnowext'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='athlon-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='3dnow'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='3dnowext'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='core2duo'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='core2duo-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='coreduo'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='coreduo-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='n270'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='n270-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='ss'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='phenom'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='3dnow'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='3dnowext'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <blockers model='phenom-v1'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='3dnow'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <feature name='3dnowext'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </blockers>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </mode>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   </cpu>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <memoryBacking supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <enum name='sourceType'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <value>file</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <value>anonymous</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <value>memfd</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   </memoryBacking>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <devices>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <disk supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='diskDevice'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>disk</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>cdrom</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>floppy</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>lun</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='bus'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>ide</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>fdc</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>scsi</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>virtio</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>usb</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>sata</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='model'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>virtio</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>virtio-transitional</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>virtio-non-transitional</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </disk>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <graphics supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='type'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>vnc</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>egl-headless</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>dbus</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </graphics>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <video supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='modelType'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>vga</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>cirrus</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>virtio</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>none</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>bochs</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>ramfb</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </video>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <hostdev supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='mode'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>subsystem</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='startupPolicy'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>default</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>mandatory</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>requisite</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>optional</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='subsysType'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>usb</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>pci</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>scsi</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='capsType'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='pciBackend'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </hostdev>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <rng supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='model'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>virtio</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>virtio-transitional</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>virtio-non-transitional</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='backendModel'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>random</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>egd</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>builtin</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </rng>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <filesystem supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='driverType'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>path</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>handle</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>virtiofs</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </filesystem>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <tpm supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='model'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>tpm-tis</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>tpm-crb</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='backendModel'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>emulator</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>external</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='backendVersion'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>2.0</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </tpm>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <redirdev supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='bus'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>usb</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </redirdev>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <channel supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='type'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>pty</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>unix</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </channel>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <crypto supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='model'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='type'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>qemu</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='backendModel'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>builtin</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </crypto>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <interface supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='backendType'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>default</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>passt</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </interface>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <panic supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='model'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>isa</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>hyperv</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </panic>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <console supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='type'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>null</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>vc</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>pty</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>dev</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>file</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>pipe</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>stdio</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>udp</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>tcp</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>unix</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>qemu-vdagent</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>dbus</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </console>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   </devices>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   <features>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <gic supported='no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <vmcoreinfo supported='yes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <genid supported='yes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <backingStoreInput supported='yes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <backup supported='yes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <async-teardown supported='yes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <ps2 supported='yes'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <sev supported='no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <sgx supported='no'/>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <hyperv supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='features'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>relaxed</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>vapic</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>spinlocks</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>vpindex</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>runtime</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>synic</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>stimer</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>reset</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>vendor_id</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>frequencies</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>reenlightenment</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>tlbflush</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>ipi</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>avic</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>emsr_bitmap</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>xmm_input</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <defaults>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <spinlocks>4095</spinlocks>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <stimer_direct>on</stimer_direct>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <tlbflush_direct>on</tlbflush_direct>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <tlbflush_extended>on</tlbflush_extended>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </defaults>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </hyperv>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     <launchSecurity supported='yes'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       <enum name='sectype'>
Nov 23 21:01:50 compute-1 nova_compute[229084]:         <value>tdx</value>
Nov 23 21:01:50 compute-1 nova_compute[229084]:       </enum>
Nov 23 21:01:50 compute-1 nova_compute[229084]:     </launchSecurity>
Nov 23 21:01:50 compute-1 nova_compute[229084]:   </features>
Nov 23 21:01:50 compute-1 nova_compute[229084]: </domainCapabilities>
Nov 23 21:01:50 compute-1 nova_compute[229084]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 23 21:01:50 compute-1 nova_compute[229084]: 2025-11-23 21:01:50.783 229088 DEBUG nova.virt.libvirt.host [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 23 21:01:50 compute-1 nova_compute[229084]: 2025-11-23 21:01:50.783 229088 INFO nova.virt.libvirt.host [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Secure Boot support detected
Nov 23 21:01:50 compute-1 nova_compute[229084]: 2025-11-23 21:01:50.786 229088 INFO nova.virt.libvirt.driver [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 23 21:01:50 compute-1 nova_compute[229084]: 2025-11-23 21:01:50.786 229088 INFO nova.virt.libvirt.driver [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 23 21:01:50 compute-1 nova_compute[229084]: 2025-11-23 21:01:50.802 229088 DEBUG nova.virt.libvirt.driver [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 23 21:01:50 compute-1 nova_compute[229084]: 2025-11-23 21:01:50.848 229088 INFO nova.virt.node [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Determined node identity bb217351-d4c8-44a4-9137-08393a1f72bc from /var/lib/nova/compute_id
Nov 23 21:01:50 compute-1 ceph-mon[80135]: pgmap v604: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 21:01:50 compute-1 nova_compute[229084]: 2025-11-23 21:01:50.871 229088 WARNING nova.compute.manager [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Compute nodes ['bb217351-d4c8-44a4-9137-08393a1f72bc'] for host compute-1.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 23 21:01:50 compute-1 nova_compute[229084]: 2025-11-23 21:01:50.906 229088 INFO nova.compute.manager [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 23 21:01:50 compute-1 nova_compute[229084]: 2025-11-23 21:01:50.940 229088 WARNING nova.compute.manager [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] No compute node record found for host compute-1.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.
Nov 23 21:01:50 compute-1 nova_compute[229084]: 2025-11-23 21:01:50.941 229088 DEBUG oslo_concurrency.lockutils [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:01:50 compute-1 nova_compute[229084]: 2025-11-23 21:01:50.941 229088 DEBUG oslo_concurrency.lockutils [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:01:50 compute-1 nova_compute[229084]: 2025-11-23 21:01:50.941 229088 DEBUG oslo_concurrency.lockutils [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:01:50 compute-1 nova_compute[229084]: 2025-11-23 21:01:50.942 229088 DEBUG nova.compute.resource_tracker [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 21:01:50 compute-1 nova_compute[229084]: 2025-11-23 21:01:50.942 229088 DEBUG oslo_concurrency.processutils [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:01:50 compute-1 python3.9[229895]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 23 21:01:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:01:51.058 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:01:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:01:51.059 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:01:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:01:51.059 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:01:51 compute-1 sudo[229893]: pam_unix(sudo:session): session closed for user root
Nov 23 21:01:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.002000052s ======
Nov 23 21:01:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:51.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Nov 23 21:01:51 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:01:51 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1784443819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:01:51 compute-1 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 21:01:51 compute-1 nova_compute[229084]: 2025-11-23 21:01:51.389 229088 DEBUG oslo_concurrency.processutils [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:01:51 compute-1 systemd[1]: Starting libvirt nodedev daemon...
Nov 23 21:01:51 compute-1 systemd[1]: Started libvirt nodedev daemon.
Nov 23 21:01:51 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:01:51 compute-1 nova_compute[229084]: 2025-11-23 21:01:51.667 229088 WARNING nova.virt.libvirt.driver [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 21:01:51 compute-1 nova_compute[229084]: 2025-11-23 21:01:51.668 229088 DEBUG nova.compute.resource_tracker [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5293MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 21:01:51 compute-1 nova_compute[229084]: 2025-11-23 21:01:51.669 229088 DEBUG oslo_concurrency.lockutils [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:01:51 compute-1 nova_compute[229084]: 2025-11-23 21:01:51.669 229088 DEBUG oslo_concurrency.lockutils [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:01:51 compute-1 nova_compute[229084]: 2025-11-23 21:01:51.684 229088 WARNING nova.compute.resource_tracker [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] No compute node record for compute-1.ctlplane.example.com:bb217351-d4c8-44a4-9137-08393a1f72bc: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host bb217351-d4c8-44a4-9137-08393a1f72bc could not be found.
Nov 23 21:01:51 compute-1 nova_compute[229084]: 2025-11-23 21:01:51.705 229088 INFO nova.compute.resource_tracker [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Compute node record created for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com with uuid: bb217351-d4c8-44a4-9137-08393a1f72bc
Nov 23 21:01:51 compute-1 sudo[230118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvxzuscvmcmapxxrtnygqsqoyzemghtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931711.453247-4359-258603184447195/AnsiballZ_systemd.py'
Nov 23 21:01:51 compute-1 sudo[230118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:01:51 compute-1 nova_compute[229084]: 2025-11-23 21:01:51.773 229088 DEBUG nova.compute.resource_tracker [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 21:01:51 compute-1 nova_compute[229084]: 2025-11-23 21:01:51.774 229088 DEBUG nova.compute.resource_tracker [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 21:01:51 compute-1 ceph-mon[80135]: pgmap v605: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 21:01:51 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1784443819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:01:51 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2496637839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:01:51 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3600869024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:01:52 compute-1 python3.9[230120]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 21:01:52 compute-1 systemd[1]: Stopping nova_compute container...
Nov 23 21:01:52 compute-1 nova_compute[229084]: 2025-11-23 21:01:52.126 229088 DEBUG oslo_concurrency.lockutils [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.457s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:01:52 compute-1 nova_compute[229084]: 2025-11-23 21:01:52.127 229088 DEBUG oslo_concurrency.lockutils [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:01:52 compute-1 nova_compute[229084]: 2025-11-23 21:01:52.127 229088 DEBUG oslo_concurrency.lockutils [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:01:52 compute-1 nova_compute[229084]: 2025-11-23 21:01:52.127 229088 DEBUG oslo_concurrency.lockutils [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:01:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:52.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:01:52 compute-1 virtqemud[229705]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 23 21:01:52 compute-1 virtqemud[229705]: hostname: compute-1
Nov 23 21:01:52 compute-1 virtqemud[229705]: End of file while reading data: Input/output error
Nov 23 21:01:52 compute-1 systemd[1]: libpod-e57bd1d81dfaddb9e853d32adec041960bea7d0beff0b4ed65acc6be6ec8e0df.scope: Deactivated successfully.
Nov 23 21:01:52 compute-1 systemd[1]: libpod-e57bd1d81dfaddb9e853d32adec041960bea7d0beff0b4ed65acc6be6ec8e0df.scope: Consumed 3.644s CPU time.
Nov 23 21:01:52 compute-1 podman[230124]: 2025-11-23 21:01:52.543694819 +0000 UTC m=+0.451103010 container died e57bd1d81dfaddb9e853d32adec041960bea7d0beff0b4ed65acc6be6ec8e0df (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 23 21:01:52 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e57bd1d81dfaddb9e853d32adec041960bea7d0beff0b4ed65acc6be6ec8e0df-userdata-shm.mount: Deactivated successfully.
Nov 23 21:01:52 compute-1 systemd[1]: var-lib-containers-storage-overlay-7d5f9a839ebcc2f2be232b73ad095168158bbf494a95d293329848704129b6cc-merged.mount: Deactivated successfully.
Nov 23 21:01:52 compute-1 podman[230124]: 2025-11-23 21:01:52.81850137 +0000 UTC m=+0.725909551 container cleanup e57bd1d81dfaddb9e853d32adec041960bea7d0beff0b4ed65acc6be6ec8e0df (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 23 21:01:52 compute-1 podman[230124]: nova_compute
Nov 23 21:01:52 compute-1 podman[230154]: nova_compute
Nov 23 21:01:52 compute-1 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 23 21:01:52 compute-1 systemd[1]: Stopped nova_compute container.
Nov 23 21:01:52 compute-1 systemd[1]: Starting nova_compute container...
Nov 23 21:01:53 compute-1 systemd[1]: Started libcrun container.
Nov 23 21:01:53 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5f9a839ebcc2f2be232b73ad095168158bbf494a95d293329848704129b6cc/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 23 21:01:53 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5f9a839ebcc2f2be232b73ad095168158bbf494a95d293329848704129b6cc/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 23 21:01:53 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5f9a839ebcc2f2be232b73ad095168158bbf494a95d293329848704129b6cc/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 23 21:01:53 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5f9a839ebcc2f2be232b73ad095168158bbf494a95d293329848704129b6cc/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 21:01:53 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5f9a839ebcc2f2be232b73ad095168158bbf494a95d293329848704129b6cc/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 23 21:01:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:53.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:01:53 compute-1 podman[230167]: 2025-11-23 21:01:53.337540746 +0000 UTC m=+0.417232989 container init e57bd1d81dfaddb9e853d32adec041960bea7d0beff0b4ed65acc6be6ec8e0df (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 21:01:53 compute-1 podman[230167]: 2025-11-23 21:01:53.343120595 +0000 UTC m=+0.422812808 container start e57bd1d81dfaddb9e853d32adec041960bea7d0beff0b4ed65acc6be6ec8e0df (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 23 21:01:53 compute-1 nova_compute[230183]: + sudo -E kolla_set_configs
Nov 23 21:01:53 compute-1 podman[230167]: nova_compute
Nov 23 21:01:53 compute-1 systemd[1]: Started nova_compute container.
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Validating config file
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Copying service configuration files
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Deleting /etc/ceph
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Creating directory /etc/ceph
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Setting permission for /etc/ceph
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 23 21:01:53 compute-1 sudo[230118]: pam_unix(sudo:session): session closed for user root
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Writing out command to execute
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 23 21:01:53 compute-1 nova_compute[230183]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 23 21:01:53 compute-1 nova_compute[230183]: ++ cat /run_command
Nov 23 21:01:53 compute-1 nova_compute[230183]: + CMD=nova-compute
Nov 23 21:01:53 compute-1 nova_compute[230183]: + ARGS=
Nov 23 21:01:53 compute-1 nova_compute[230183]: + sudo kolla_copy_cacerts
Nov 23 21:01:53 compute-1 nova_compute[230183]: + [[ ! -n '' ]]
Nov 23 21:01:53 compute-1 nova_compute[230183]: + . kolla_extend_start
Nov 23 21:01:53 compute-1 nova_compute[230183]: Running command: 'nova-compute'
Nov 23 21:01:53 compute-1 nova_compute[230183]: + echo 'Running command: '\''nova-compute'\'''
Nov 23 21:01:53 compute-1 nova_compute[230183]: + umask 0022
Nov 23 21:01:53 compute-1 nova_compute[230183]: + exec nova-compute
Nov 23 21:01:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:01:53 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 21:01:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:01:53 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 21:01:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:54.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:01:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:01:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:55.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.271 230187 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.271 230187 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.271 230187 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.272 230187 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.396 230187 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.417 230187 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.418 230187 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Nov 23 21:01:55 compute-1 ceph-mon[80135]: pgmap v606: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.869 230187 INFO nova.virt.driver [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.968 230187 INFO nova.compute.provider_config [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.974 230187 DEBUG oslo_concurrency.lockutils [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.974 230187 DEBUG oslo_concurrency.lockutils [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.974 230187 DEBUG oslo_concurrency.lockutils [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.975 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.975 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.975 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.975 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.976 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.976 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.976 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.976 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.976 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.976 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.976 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.977 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.977 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.977 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.977 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.977 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.977 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.977 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.978 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.978 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.978 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.978 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.978 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.978 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.978 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.979 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.979 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.979 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.979 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.979 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.979 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.980 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.980 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.980 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.980 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.980 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.980 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.981 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.981 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.981 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.981 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.981 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.982 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.982 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.982 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.982 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.982 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.982 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.983 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.983 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.983 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.983 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.983 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.983 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.984 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.984 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.984 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.984 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.984 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.984 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.984 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.984 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.985 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.985 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.985 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.985 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.985 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.985 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.985 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.986 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.986 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.986 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.986 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.986 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.986 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.986 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.987 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.987 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.987 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.987 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.987 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.987 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.987 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.988 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.988 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.988 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.988 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.988 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.988 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.989 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.989 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.989 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.989 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.989 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.989 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.989 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.990 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.990 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.990 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.990 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.990 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.990 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.991 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.991 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.991 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.991 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.991 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.992 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.992 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.992 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.992 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.992 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.992 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.992 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.993 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.993 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.993 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.993 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.993 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.993 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.993 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.994 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.994 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.994 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.994 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.994 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.994 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.994 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.995 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.995 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.995 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.995 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.995 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.995 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.996 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.996 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.996 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.996 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.996 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.996 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.996 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.997 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.997 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.997 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.997 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.997 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.997 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.997 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.998 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.998 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.998 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.998 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.998 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.998 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.999 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.999 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.999 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.999 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:55 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.999 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:55.999 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.000 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.000 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.000 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.000 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.000 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.000 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.000 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.001 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.001 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.001 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.001 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.001 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.001 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.001 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.002 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.002 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.002 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.002 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.002 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.002 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.002 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.003 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.003 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.003 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.003 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.003 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.003 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.004 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.004 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.004 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.004 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.004 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.004 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.004 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.005 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.005 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.005 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.005 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.005 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.005 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.006 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.006 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.006 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.006 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.006 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.006 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.007 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.007 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.007 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.007 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.008 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.008 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.008 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.008 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.008 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.008 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.008 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.009 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.009 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.009 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.009 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.009 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.009 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.010 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.010 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.010 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.010 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.010 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.010 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.011 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.011 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.011 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.011 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.011 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.011 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.012 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.012 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.012 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.012 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.012 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.012 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.012 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.013 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.013 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.013 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.013 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.013 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.013 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.013 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.014 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.014 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.014 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.014 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.014 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.014 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.014 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.015 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.015 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.015 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.015 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.015 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.016 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.016 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.016 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.016 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.016 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.016 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.016 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.017 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.017 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.017 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.017 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.017 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.017 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.017 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.018 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.018 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.018 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.018 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.018 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.018 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.019 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.019 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.019 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.019 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.019 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.019 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.020 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.020 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.020 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.020 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.020 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.020 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.020 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.021 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.021 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.021 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.021 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.021 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.021 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.021 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.022 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.022 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.022 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.022 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.022 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.022 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.022 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.023 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.023 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.023 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.023 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.023 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.023 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.023 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.023 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.024 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.024 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.024 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.024 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.024 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.024 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.024 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.025 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.025 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.025 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.025 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.025 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.025 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.026 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.026 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.026 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.026 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.026 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.026 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.026 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.027 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.027 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.027 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.027 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.027 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.027 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.027 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.028 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.028 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.028 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.028 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.028 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.028 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.028 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.029 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.029 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.029 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.029 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.029 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.030 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.030 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.030 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.030 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.030 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.030 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.030 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.031 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.031 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.031 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.031 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.031 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.031 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.032 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.032 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.032 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.032 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.032 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.032 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.032 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.033 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.033 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.033 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.033 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.033 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.033 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.033 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.034 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.034 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.034 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.034 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.034 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.034 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.034 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.035 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.035 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.035 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.035 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.035 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.035 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.035 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.036 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.036 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.036 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.036 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.036 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.036 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.036 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.037 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.037 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.037 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.037 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.037 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.038 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.038 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.038 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.038 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.038 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.038 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.039 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.039 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.039 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.039 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.039 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.039 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.040 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.040 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.040 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.040 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.040 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.040 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.041 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.041 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.041 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.041 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.041 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.041 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.042 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.042 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.042 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.042 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.042 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.042 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.042 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.043 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.043 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.043 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.043 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.043 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.044 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.044 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.044 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.044 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.044 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.044 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.044 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.045 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.045 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.045 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.045 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.045 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.045 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.045 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.046 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.046 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.046 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.046 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.046 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.046 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.047 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.047 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.047 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.047 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.047 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.047 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.047 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.048 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.048 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.048 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.048 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.048 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.049 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.049 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.049 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.049 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.049 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.049 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.050 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.050 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.050 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.050 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.050 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.050 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.050 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.051 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.051 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.051 230187 WARNING oslo_config.cfg [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 23 21:01:56 compute-1 nova_compute[230183]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 23 21:01:56 compute-1 nova_compute[230183]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 23 21:01:56 compute-1 nova_compute[230183]: and ``live_migration_inbound_addr`` respectively.
Nov 23 21:01:56 compute-1 nova_compute[230183]: ).  Its value may be silently ignored in the future.
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.051 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.051 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.051 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.052 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.052 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.052 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.052 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.052 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.052 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.053 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.053 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.053 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.053 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.053 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.053 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.053 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.054 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.054 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.054 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.rbd_secret_uuid        = 03808be8-ae4a-5548-82e6-4a294f1bc627 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.054 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.054 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.054 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.054 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.055 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.055 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.055 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.055 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.055 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.055 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.055 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.056 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.056 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.056 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.056 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.056 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.056 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.057 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.057 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.057 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.057 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.057 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.057 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.058 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.058 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.058 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.058 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.058 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.058 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.058 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.059 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.059 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.059 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.059 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.059 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.059 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.059 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.060 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.060 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.060 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.060 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.060 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.060 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.060 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.061 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.061 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.061 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.061 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.061 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.061 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.062 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.062 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.062 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.062 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.062 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.062 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.062 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.063 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.063 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.063 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.063 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.063 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.063 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.064 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.064 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.064 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.064 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.064 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.064 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.064 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.065 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.065 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.065 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.065 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.065 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.065 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.066 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.066 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.066 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.066 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.066 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.066 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.067 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.067 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.067 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.067 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.067 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.067 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.067 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.068 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.068 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.068 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.068 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.068 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.068 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.068 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.069 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.069 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.069 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.069 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.069 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.069 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.069 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.070 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.070 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.070 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.070 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.070 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.070 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.071 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.071 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.071 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.071 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.071 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.071 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.071 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.072 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.072 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.072 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.072 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.072 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.073 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.073 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.073 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.073 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.073 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.073 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.073 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.074 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.074 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.074 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.074 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.074 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.074 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.074 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.075 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.075 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.075 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.075 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.075 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.075 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.076 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.076 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.076 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.076 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.076 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.076 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.077 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.077 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.077 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.077 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.077 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.077 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.077 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.077 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.078 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.078 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.078 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.078 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.078 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.079 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.079 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.079 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.079 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.079 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.079 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.080 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.080 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.080 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.080 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.080 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.080 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.080 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.080 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.081 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.081 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.081 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.081 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.081 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.081 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.082 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.082 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.082 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.082 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.082 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.082 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.082 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.083 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.083 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.083 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.083 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.083 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.083 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.083 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.084 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.084 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.084 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.084 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.084 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.084 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.084 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.085 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.085 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.085 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.085 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.085 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.085 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.085 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.086 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.086 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.086 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.086 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.086 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.086 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.086 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.087 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.087 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.087 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.087 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.087 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.087 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.088 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.088 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.088 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.088 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.088 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.088 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.089 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.089 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.089 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.089 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.089 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.090 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.090 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.090 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.090 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.090 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.091 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.091 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.091 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.091 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.091 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.091 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.092 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.092 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.092 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.092 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.092 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.093 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.093 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.093 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.093 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.093 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.093 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.094 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.094 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.094 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.094 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.094 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.094 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.094 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.095 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.095 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.095 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.095 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.095 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.096 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.096 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.096 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.096 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.096 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.096 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.096 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.097 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.097 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.097 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.097 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.097 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.098 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.098 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.098 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.098 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.098 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.098 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.098 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.099 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.099 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.099 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.099 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.099 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.099 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.099 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.100 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.100 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.100 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.100 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.100 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.101 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.101 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.101 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.101 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.101 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.101 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.102 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.102 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.102 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.102 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.102 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.102 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.102 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.103 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.103 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.103 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.103 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.103 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.103 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.104 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.104 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.104 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.104 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.104 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.104 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.105 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.105 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.105 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.105 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.105 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.105 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.106 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.106 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.106 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.106 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.106 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.106 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.106 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.107 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.107 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.107 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.107 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.107 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.107 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.107 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.108 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.108 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.108 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.108 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.108 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.108 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.108 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.109 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.109 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.109 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.109 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.109 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.109 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.109 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.110 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.110 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.110 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.110 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.110 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.110 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.110 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.110 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.111 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.111 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.111 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.111 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.111 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.111 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.112 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.112 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.112 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.112 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.112 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.112 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.112 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.113 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.113 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.113 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.113 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.113 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.113 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.114 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.114 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.114 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.114 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.114 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.114 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.114 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.115 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.115 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.115 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.115 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.115 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.115 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.116 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.116 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.116 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.116 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.116 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.116 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.116 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.117 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.117 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.118 230187 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.132 230187 INFO nova.virt.node [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Determined node identity bb217351-d4c8-44a4-9137-08393a1f72bc from /var/lib/nova/compute_id
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.133 230187 DEBUG nova.virt.libvirt.host [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.134 230187 DEBUG nova.virt.libvirt.host [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.134 230187 DEBUG nova.virt.libvirt.host [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.134 230187 DEBUG nova.virt.libvirt.host [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.146 230187 DEBUG nova.virt.libvirt.host [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fb4f07e6520> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.150 230187 DEBUG nova.virt.libvirt.host [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fb4f07e6520> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.151 230187 INFO nova.virt.libvirt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Connection event '1' reason 'None'
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.159 230187 INFO nova.virt.libvirt.host [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Libvirt host capabilities <capabilities>
Nov 23 21:01:56 compute-1 nova_compute[230183]: 
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <host>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <uuid>dffd854b-01ce-4a28-b7a6-32174dbe320c</uuid>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <cpu>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <arch>x86_64</arch>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model>EPYC-Rome-v4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <vendor>AMD</vendor>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <microcode version='16777317'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <signature family='23' model='49' stepping='0'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <maxphysaddr mode='emulate' bits='40'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature name='x2apic'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature name='tsc-deadline'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature name='osxsave'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature name='hypervisor'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature name='tsc_adjust'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature name='spec-ctrl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature name='stibp'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature name='arch-capabilities'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature name='ssbd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature name='cmp_legacy'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature name='topoext'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature name='virt-ssbd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature name='lbrv'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature name='tsc-scale'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature name='vmcb-clean'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature name='pause-filter'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature name='pfthreshold'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature name='svme-addr-chk'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature name='rdctl-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature name='skip-l1dfl-vmentry'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature name='mds-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature name='pschange-mc-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <pages unit='KiB' size='4'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <pages unit='KiB' size='2048'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <pages unit='KiB' size='1048576'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </cpu>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <power_management>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <suspend_mem/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </power_management>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <iommu support='no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <migration_features>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <live/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <uri_transports>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <uri_transport>tcp</uri_transport>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <uri_transport>rdma</uri_transport>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </uri_transports>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </migration_features>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <topology>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <cells num='1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <cell id='0'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:           <memory unit='KiB'>7864320</memory>
Nov 23 21:01:56 compute-1 nova_compute[230183]:           <pages unit='KiB' size='4'>1966080</pages>
Nov 23 21:01:56 compute-1 nova_compute[230183]:           <pages unit='KiB' size='2048'>0</pages>
Nov 23 21:01:56 compute-1 nova_compute[230183]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 23 21:01:56 compute-1 nova_compute[230183]:           <distances>
Nov 23 21:01:56 compute-1 nova_compute[230183]:             <sibling id='0' value='10'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:           </distances>
Nov 23 21:01:56 compute-1 nova_compute[230183]:           <cpus num='8'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:           </cpus>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         </cell>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </cells>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </topology>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <cache>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </cache>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <secmodel>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model>selinux</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <doi>0</doi>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </secmodel>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <secmodel>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model>dac</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <doi>0</doi>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </secmodel>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   </host>
Nov 23 21:01:56 compute-1 nova_compute[230183]: 
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <guest>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <os_type>hvm</os_type>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <arch name='i686'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <wordsize>32</wordsize>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <domain type='qemu'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <domain type='kvm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </arch>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <features>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <pae/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <nonpae/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <acpi default='on' toggle='yes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <apic default='on' toggle='no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <cpuselection/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <deviceboot/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <disksnapshot default='on' toggle='no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <externalSnapshot/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </features>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   </guest>
Nov 23 21:01:56 compute-1 nova_compute[230183]: 
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <guest>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <os_type>hvm</os_type>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <arch name='x86_64'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <wordsize>64</wordsize>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <domain type='qemu'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <domain type='kvm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </arch>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <features>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <acpi default='on' toggle='yes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <apic default='on' toggle='no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <cpuselection/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <deviceboot/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <disksnapshot default='on' toggle='no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <externalSnapshot/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </features>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   </guest>
Nov 23 21:01:56 compute-1 nova_compute[230183]: 
Nov 23 21:01:56 compute-1 nova_compute[230183]: </capabilities>
Nov 23 21:01:56 compute-1 nova_compute[230183]: 
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.161 230187 DEBUG nova.virt.libvirt.volume.mount [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 23 21:01:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:56.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.168 230187 DEBUG nova.virt.libvirt.host [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.171 230187 DEBUG nova.virt.libvirt.host [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 23 21:01:56 compute-1 nova_compute[230183]: <domainCapabilities>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <path>/usr/libexec/qemu-kvm</path>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <domain>kvm</domain>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <arch>i686</arch>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <vcpu max='4096'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <iothreads supported='yes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <os supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <enum name='firmware'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <loader supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='type'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>rom</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>pflash</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='readonly'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>yes</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>no</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='secure'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>no</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </loader>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   </os>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <cpu>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <mode name='host-passthrough' supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='hostPassthroughMigratable'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>on</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>off</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </mode>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <mode name='maximum' supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='maximumMigratable'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>on</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>off</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </mode>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <mode name='host-model' supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <vendor>AMD</vendor>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='x2apic'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='tsc-deadline'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='hypervisor'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='tsc_adjust'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='spec-ctrl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='stibp'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='ssbd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='cmp_legacy'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='overflow-recov'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='succor'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='ibrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='amd-ssbd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='virt-ssbd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='lbrv'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='tsc-scale'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='vmcb-clean'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='flushbyasid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='pause-filter'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='pfthreshold'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='svme-addr-chk'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='disable' name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </mode>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <mode name='custom' supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Broadwell'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Broadwell-IBRS'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Broadwell-noTSX'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Broadwell-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Broadwell-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Broadwell-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Broadwell-v4'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cascadelake-Server'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cascadelake-Server-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cascadelake-Server-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cascadelake-Server-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cascadelake-Server-v4'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cascadelake-Server-v5'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cooperlake'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cooperlake-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cooperlake-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Denverton'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='mpx'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Denverton-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='mpx'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Denverton-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Denverton-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Dhyana-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-Genoa'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amd-psfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='auto-ibrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='no-nested-data-bp'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='null-sel-clr-base'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='stibp-always-on'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-Genoa-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amd-psfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='auto-ibrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='no-nested-data-bp'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='null-sel-clr-base'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='stibp-always-on'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-Milan'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-Milan-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-Milan-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amd-psfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='no-nested-data-bp'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='null-sel-clr-base'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='stibp-always-on'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-Rome'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-Rome-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-Rome-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-Rome-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-v4'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='GraniteRapids'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-int8'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-tile'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fbsdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrc'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fzrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='mcdt-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pbrsb-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='prefetchiti'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='psdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='serialize'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='GraniteRapids-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-int8'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-tile'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fbsdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrc'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fzrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='mcdt-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pbrsb-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='prefetchiti'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='psdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='serialize'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='GraniteRapids-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-int8'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-tile'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx10'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx10-128'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx10-256'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx10-512'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='cldemote'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fbsdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrc'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fzrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='mcdt-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdir64b'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdiri'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pbrsb-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='prefetchiti'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='psdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='serialize'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Haswell'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Haswell-IBRS'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Haswell-noTSX'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Haswell-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Haswell-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Haswell-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Haswell-v4'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Icelake-Server'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Icelake-Server-noTSX'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Icelake-Server-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Icelake-Server-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Icelake-Server-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Icelake-Server-v4'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Icelake-Server-v5'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Icelake-Server-v6'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Icelake-Server-v7'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='IvyBridge'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='IvyBridge-IBRS'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='IvyBridge-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='IvyBridge-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='KnightsMill'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-4fmaps'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-4vnniw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512er'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512pf'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='KnightsMill-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-4fmaps'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-4vnniw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512er'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512pf'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Opteron_G4'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fma4'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xop'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Opteron_G4-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fma4'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xop'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Opteron_G5'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fma4'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='tbm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xop'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Opteron_G5-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fma4'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='tbm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xop'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='SapphireRapids'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-int8'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-tile'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrc'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fzrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='serialize'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='SapphireRapids-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-int8'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-tile'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrc'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fzrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='serialize'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='SapphireRapids-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-int8'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-tile'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fbsdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrc'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fzrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='psdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='serialize'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='SapphireRapids-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-int8'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-tile'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='cldemote'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fbsdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrc'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fzrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdir64b'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdiri'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='psdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='serialize'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='SierraForest'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-ne-convert'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni-int8'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='cmpccxadd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fbsdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='mcdt-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pbrsb-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='psdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='serialize'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='SierraForest-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-ne-convert'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni-int8'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='cmpccxadd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fbsdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='mcdt-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pbrsb-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='psdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='serialize'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Client'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Client-IBRS'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Client-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Client-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Client-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Client-v4'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Server'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Server-IBRS'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Server-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Server-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Server-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Server-v4'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Server-v5'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Snowridge'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='cldemote'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='core-capability'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdir64b'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdiri'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='mpx'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='split-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Snowridge-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='cldemote'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='core-capability'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdir64b'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdiri'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='mpx'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='split-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Snowridge-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='cldemote'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='core-capability'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdir64b'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdiri'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='split-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Snowridge-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='cldemote'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='core-capability'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdir64b'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdiri'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='split-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Snowridge-v4'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='cldemote'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdir64b'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdiri'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='athlon'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='3dnow'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='3dnowext'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='athlon-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='3dnow'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='3dnowext'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='core2duo'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='core2duo-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='coreduo'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='coreduo-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='n270'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='n270-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='phenom'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='3dnow'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='3dnowext'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='phenom-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='3dnow'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='3dnowext'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </mode>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   </cpu>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <memoryBacking supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <enum name='sourceType'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <value>file</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <value>anonymous</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <value>memfd</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   </memoryBacking>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <devices>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <disk supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='diskDevice'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>disk</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>cdrom</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>floppy</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>lun</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='bus'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>fdc</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>scsi</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>virtio</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>usb</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>sata</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='model'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>virtio</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>virtio-transitional</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>virtio-non-transitional</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </disk>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <graphics supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='type'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>vnc</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>egl-headless</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>dbus</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </graphics>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <video supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='modelType'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>vga</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>cirrus</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>virtio</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>none</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>bochs</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>ramfb</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </video>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <hostdev supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='mode'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>subsystem</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='startupPolicy'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>default</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>mandatory</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>requisite</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>optional</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='subsysType'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>usb</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>pci</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>scsi</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='capsType'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='pciBackend'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </hostdev>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <rng supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='model'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>virtio</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>virtio-transitional</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>virtio-non-transitional</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='backendModel'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>random</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>egd</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>builtin</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </rng>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <filesystem supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='driverType'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>path</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>handle</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>virtiofs</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </filesystem>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <tpm supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='model'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>tpm-tis</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>tpm-crb</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='backendModel'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>emulator</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>external</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='backendVersion'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>2.0</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </tpm>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <redirdev supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='bus'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>usb</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </redirdev>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <channel supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='type'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>pty</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>unix</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </channel>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <crypto supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='model'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='type'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>qemu</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='backendModel'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>builtin</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </crypto>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <interface supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='backendType'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>default</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>passt</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </interface>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <panic supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='model'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>isa</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>hyperv</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </panic>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <console supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='type'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>null</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>vc</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>pty</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>dev</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>file</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>pipe</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>stdio</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>udp</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>tcp</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>unix</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>qemu-vdagent</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>dbus</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </console>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   </devices>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <features>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <gic supported='no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <vmcoreinfo supported='yes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <genid supported='yes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <backingStoreInput supported='yes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <backup supported='yes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <async-teardown supported='yes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <ps2 supported='yes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <sev supported='no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <sgx supported='no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <hyperv supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='features'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>relaxed</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>vapic</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>spinlocks</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>vpindex</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>runtime</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>synic</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>stimer</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>reset</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>vendor_id</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>frequencies</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>reenlightenment</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>tlbflush</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>ipi</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>avic</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>emsr_bitmap</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>xmm_input</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <defaults>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <spinlocks>4095</spinlocks>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <stimer_direct>on</stimer_direct>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <tlbflush_direct>on</tlbflush_direct>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <tlbflush_extended>on</tlbflush_extended>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </defaults>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </hyperv>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <launchSecurity supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='sectype'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>tdx</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </launchSecurity>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   </features>
Nov 23 21:01:56 compute-1 nova_compute[230183]: </domainCapabilities>
Nov 23 21:01:56 compute-1 nova_compute[230183]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.178 230187 DEBUG nova.virt.libvirt.host [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 23 21:01:56 compute-1 nova_compute[230183]: <domainCapabilities>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <path>/usr/libexec/qemu-kvm</path>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <domain>kvm</domain>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <arch>i686</arch>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <vcpu max='240'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <iothreads supported='yes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <os supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <enum name='firmware'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <loader supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='type'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>rom</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>pflash</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='readonly'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>yes</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>no</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='secure'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>no</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </loader>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   </os>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <cpu>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <mode name='host-passthrough' supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='hostPassthroughMigratable'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>on</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>off</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </mode>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <mode name='maximum' supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='maximumMigratable'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>on</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>off</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </mode>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <mode name='host-model' supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <vendor>AMD</vendor>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='x2apic'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='tsc-deadline'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='hypervisor'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='tsc_adjust'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='spec-ctrl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='stibp'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='ssbd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='cmp_legacy'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='overflow-recov'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='succor'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='ibrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='amd-ssbd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='virt-ssbd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='lbrv'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='tsc-scale'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='vmcb-clean'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='flushbyasid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='pause-filter'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='pfthreshold'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='svme-addr-chk'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='disable' name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </mode>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <mode name='custom' supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Broadwell'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Broadwell-IBRS'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Broadwell-noTSX'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Broadwell-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Broadwell-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Broadwell-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Broadwell-v4'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cascadelake-Server'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cascadelake-Server-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cascadelake-Server-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cascadelake-Server-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cascadelake-Server-v4'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cascadelake-Server-v5'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cooperlake'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cooperlake-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cooperlake-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Denverton'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='mpx'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Denverton-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='mpx'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Denverton-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Denverton-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Dhyana-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-Genoa'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amd-psfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='auto-ibrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='no-nested-data-bp'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='null-sel-clr-base'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='stibp-always-on'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-Genoa-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amd-psfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='auto-ibrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='no-nested-data-bp'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='null-sel-clr-base'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='stibp-always-on'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-Milan'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-Milan-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-Milan-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amd-psfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='no-nested-data-bp'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='null-sel-clr-base'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='stibp-always-on'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-Rome'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-Rome-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-Rome-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-Rome-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-v4'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='GraniteRapids'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-int8'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-tile'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fbsdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrc'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fzrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='mcdt-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pbrsb-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='prefetchiti'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='psdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='serialize'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='GraniteRapids-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-int8'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-tile'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fbsdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrc'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fzrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='mcdt-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pbrsb-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='prefetchiti'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='psdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='serialize'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='GraniteRapids-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-int8'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-tile'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx10'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx10-128'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx10-256'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx10-512'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='cldemote'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fbsdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrc'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fzrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='mcdt-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdir64b'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdiri'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pbrsb-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='prefetchiti'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='psdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='serialize'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Haswell'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Haswell-IBRS'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Haswell-noTSX'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Haswell-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Haswell-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Haswell-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Haswell-v4'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Icelake-Server'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Icelake-Server-noTSX'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Icelake-Server-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Icelake-Server-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Icelake-Server-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Icelake-Server-v4'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Icelake-Server-v5'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Icelake-Server-v6'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Icelake-Server-v7'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='IvyBridge'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='IvyBridge-IBRS'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='IvyBridge-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='IvyBridge-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='KnightsMill'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-4fmaps'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-4vnniw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512er'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512pf'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='KnightsMill-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-4fmaps'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-4vnniw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512er'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512pf'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Opteron_G4'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fma4'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xop'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Opteron_G4-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fma4'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xop'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Opteron_G5'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fma4'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='tbm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xop'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Opteron_G5-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fma4'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='tbm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xop'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='SapphireRapids'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-int8'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-tile'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrc'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fzrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='serialize'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='SapphireRapids-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-int8'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-tile'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrc'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fzrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='serialize'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='SapphireRapids-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-int8'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-tile'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fbsdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrc'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fzrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='psdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='serialize'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='SapphireRapids-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-int8'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-tile'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='cldemote'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fbsdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrc'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fzrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdir64b'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdiri'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='psdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='serialize'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='SierraForest'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-ne-convert'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni-int8'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='cmpccxadd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fbsdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='mcdt-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pbrsb-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='psdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='serialize'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='SierraForest-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-ne-convert'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni-int8'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='cmpccxadd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fbsdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='mcdt-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pbrsb-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='psdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='serialize'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Client'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Client-IBRS'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Client-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Client-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Client-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Client-v4'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Server'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Server-IBRS'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Server-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Server-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Server-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Server-v4'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Server-v5'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Snowridge'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='cldemote'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='core-capability'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdir64b'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdiri'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='mpx'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='split-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Snowridge-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='cldemote'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='core-capability'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdir64b'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdiri'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='mpx'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='split-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Snowridge-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='cldemote'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='core-capability'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdir64b'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdiri'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='split-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Snowridge-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='cldemote'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='core-capability'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdir64b'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdiri'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='split-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Snowridge-v4'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='cldemote'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdir64b'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdiri'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='athlon'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='3dnow'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='3dnowext'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='athlon-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='3dnow'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='3dnowext'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='core2duo'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='core2duo-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='coreduo'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='coreduo-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='n270'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='n270-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='phenom'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='3dnow'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='3dnowext'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='phenom-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='3dnow'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='3dnowext'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </mode>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   </cpu>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <memoryBacking supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <enum name='sourceType'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <value>file</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <value>anonymous</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <value>memfd</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   </memoryBacking>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <devices>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <disk supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='diskDevice'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>disk</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>cdrom</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>floppy</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>lun</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='bus'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>ide</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>fdc</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>scsi</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>virtio</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>usb</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>sata</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='model'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>virtio</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>virtio-transitional</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>virtio-non-transitional</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </disk>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <graphics supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='type'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>vnc</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>egl-headless</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>dbus</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </graphics>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <video supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='modelType'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>vga</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>cirrus</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>virtio</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>none</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>bochs</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>ramfb</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </video>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <hostdev supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='mode'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>subsystem</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='startupPolicy'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>default</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>mandatory</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>requisite</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>optional</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='subsysType'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>usb</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>pci</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>scsi</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='capsType'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='pciBackend'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </hostdev>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <rng supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='model'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>virtio</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>virtio-transitional</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>virtio-non-transitional</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='backendModel'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>random</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>egd</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>builtin</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </rng>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <filesystem supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='driverType'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>path</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>handle</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>virtiofs</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </filesystem>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <tpm supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='model'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>tpm-tis</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>tpm-crb</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='backendModel'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>emulator</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>external</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='backendVersion'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>2.0</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </tpm>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <redirdev supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='bus'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>usb</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </redirdev>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <channel supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='type'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>pty</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>unix</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </channel>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <crypto supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='model'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='type'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>qemu</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='backendModel'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>builtin</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </crypto>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <interface supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='backendType'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>default</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>passt</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </interface>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <panic supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='model'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>isa</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>hyperv</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </panic>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <console supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='type'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>null</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>vc</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>pty</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>dev</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>file</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>pipe</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>stdio</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>udp</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>tcp</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>unix</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>qemu-vdagent</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>dbus</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </console>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   </devices>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <features>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <gic supported='no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <vmcoreinfo supported='yes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <genid supported='yes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <backingStoreInput supported='yes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <backup supported='yes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <async-teardown supported='yes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <ps2 supported='yes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <sev supported='no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <sgx supported='no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <hyperv supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='features'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>relaxed</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>vapic</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>spinlocks</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>vpindex</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>runtime</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>synic</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>stimer</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>reset</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>vendor_id</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>frequencies</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>reenlightenment</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>tlbflush</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>ipi</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>avic</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>emsr_bitmap</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>xmm_input</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <defaults>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <spinlocks>4095</spinlocks>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <stimer_direct>on</stimer_direct>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <tlbflush_direct>on</tlbflush_direct>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <tlbflush_extended>on</tlbflush_extended>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </defaults>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </hyperv>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <launchSecurity supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='sectype'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>tdx</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </launchSecurity>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   </features>
Nov 23 21:01:56 compute-1 nova_compute[230183]: </domainCapabilities>
Nov 23 21:01:56 compute-1 nova_compute[230183]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.208 230187 DEBUG nova.virt.libvirt.host [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.212 230187 DEBUG nova.virt.libvirt.host [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 23 21:01:56 compute-1 nova_compute[230183]: <domainCapabilities>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <path>/usr/libexec/qemu-kvm</path>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <domain>kvm</domain>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <arch>x86_64</arch>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <vcpu max='4096'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <iothreads supported='yes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <os supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <enum name='firmware'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <value>efi</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <loader supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='type'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>rom</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>pflash</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='readonly'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>yes</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>no</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='secure'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>yes</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>no</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </loader>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   </os>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <cpu>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <mode name='host-passthrough' supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='hostPassthroughMigratable'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>on</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>off</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </mode>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <mode name='maximum' supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='maximumMigratable'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>on</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>off</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </mode>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <mode name='host-model' supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <vendor>AMD</vendor>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='x2apic'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='tsc-deadline'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='hypervisor'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='tsc_adjust'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='spec-ctrl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='stibp'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='ssbd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='cmp_legacy'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='overflow-recov'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='succor'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='ibrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='amd-ssbd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='virt-ssbd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='lbrv'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='tsc-scale'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='vmcb-clean'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='flushbyasid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='pause-filter'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='pfthreshold'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='svme-addr-chk'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='disable' name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </mode>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <mode name='custom' supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Broadwell'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Broadwell-IBRS'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Broadwell-noTSX'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Broadwell-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Broadwell-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Broadwell-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Broadwell-v4'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cascadelake-Server'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cascadelake-Server-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cascadelake-Server-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cascadelake-Server-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cascadelake-Server-v4'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cascadelake-Server-v5'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cooperlake'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cooperlake-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cooperlake-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Denverton'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='mpx'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Denverton-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='mpx'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Denverton-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Denverton-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Dhyana-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-Genoa'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amd-psfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='auto-ibrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='no-nested-data-bp'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='null-sel-clr-base'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='stibp-always-on'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-Genoa-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amd-psfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='auto-ibrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='no-nested-data-bp'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='null-sel-clr-base'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='stibp-always-on'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-Milan'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-Milan-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-Milan-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amd-psfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='no-nested-data-bp'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='null-sel-clr-base'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='stibp-always-on'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-Rome'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-Rome-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-Rome-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-Rome-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-v4'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='GraniteRapids'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-int8'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-tile'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fbsdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrc'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fzrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='mcdt-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pbrsb-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='prefetchiti'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='psdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='serialize'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='GraniteRapids-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-int8'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-tile'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fbsdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrc'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fzrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='mcdt-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pbrsb-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='prefetchiti'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='psdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='serialize'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='GraniteRapids-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-int8'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-tile'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx10'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx10-128'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx10-256'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx10-512'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='cldemote'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fbsdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrc'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fzrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='mcdt-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdir64b'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdiri'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pbrsb-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='prefetchiti'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='psdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='serialize'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Haswell'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Haswell-IBRS'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Haswell-noTSX'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Haswell-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Haswell-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Haswell-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Haswell-v4'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Icelake-Server'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Icelake-Server-noTSX'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Icelake-Server-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 23 21:01:56 compute-1 sudo[230372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooutrgcyxswxuktjfycmtofpgvvgvpqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763931716.049628-4385-241763016616076/AnsiballZ_podman_container.py'
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Icelake-Server-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Icelake-Server-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Icelake-Server-v4'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 sudo[230372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Icelake-Server-v5'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Icelake-Server-v6'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Icelake-Server-v7'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='IvyBridge'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='IvyBridge-IBRS'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='IvyBridge-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='IvyBridge-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='KnightsMill'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-4fmaps'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-4vnniw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512er'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512pf'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='KnightsMill-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-4fmaps'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-4vnniw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512er'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512pf'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Opteron_G4'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fma4'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xop'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Opteron_G4-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fma4'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xop'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Opteron_G5'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fma4'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='tbm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xop'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Opteron_G5-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fma4'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='tbm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xop'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='SapphireRapids'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-int8'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-tile'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrc'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fzrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='serialize'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='SapphireRapids-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-int8'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-tile'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrc'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fzrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='serialize'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='SapphireRapids-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-int8'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-tile'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fbsdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrc'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fzrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='psdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='serialize'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='SapphireRapids-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-int8'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-tile'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='cldemote'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fbsdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrc'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fzrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdir64b'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdiri'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='psdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='serialize'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='SierraForest'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-ne-convert'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni-int8'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='cmpccxadd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fbsdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='mcdt-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pbrsb-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='psdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='serialize'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='SierraForest-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-ne-convert'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni-int8'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='cmpccxadd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fbsdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='mcdt-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pbrsb-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='psdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='serialize'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Client'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Client-IBRS'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Client-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Client-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Client-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Client-v4'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Server'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Server-IBRS'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Server-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Server-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Server-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Server-v4'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Server-v5'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Snowridge'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='cldemote'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='core-capability'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdir64b'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdiri'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='mpx'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='split-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Snowridge-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='cldemote'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='core-capability'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdir64b'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdiri'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='mpx'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='split-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Snowridge-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='cldemote'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='core-capability'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdir64b'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdiri'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='split-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Snowridge-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='cldemote'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='core-capability'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdir64b'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdiri'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='split-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Snowridge-v4'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='cldemote'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdir64b'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdiri'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='athlon'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='3dnow'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='3dnowext'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='athlon-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='3dnow'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='3dnowext'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='core2duo'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='core2duo-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='coreduo'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='coreduo-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='n270'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='n270-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='phenom'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='3dnow'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='3dnowext'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='phenom-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='3dnow'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='3dnowext'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </mode>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   </cpu>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <memoryBacking supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <enum name='sourceType'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <value>file</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <value>anonymous</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <value>memfd</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   </memoryBacking>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <devices>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <disk supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='diskDevice'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>disk</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>cdrom</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>floppy</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>lun</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='bus'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>fdc</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>scsi</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>virtio</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>usb</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>sata</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='model'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>virtio</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>virtio-transitional</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>virtio-non-transitional</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </disk>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <graphics supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='type'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>vnc</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>egl-headless</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>dbus</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </graphics>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <video supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='modelType'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>vga</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>cirrus</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>virtio</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>none</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>bochs</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>ramfb</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </video>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <hostdev supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='mode'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>subsystem</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='startupPolicy'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>default</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>mandatory</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>requisite</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>optional</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='subsysType'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>usb</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>pci</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>scsi</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='capsType'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='pciBackend'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </hostdev>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <rng supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='model'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>virtio</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>virtio-transitional</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>virtio-non-transitional</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='backendModel'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>random</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>egd</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>builtin</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </rng>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <filesystem supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='driverType'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>path</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>handle</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>virtiofs</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </filesystem>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <tpm supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='model'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>tpm-tis</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>tpm-crb</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='backendModel'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>emulator</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>external</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='backendVersion'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>2.0</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </tpm>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <redirdev supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='bus'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>usb</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </redirdev>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <channel supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='type'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>pty</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>unix</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </channel>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <crypto supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='model'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='type'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>qemu</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='backendModel'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>builtin</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </crypto>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <interface supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='backendType'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>default</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>passt</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </interface>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <panic supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='model'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>isa</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>hyperv</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </panic>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <console supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='type'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>null</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>vc</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>pty</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>dev</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>file</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>pipe</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>stdio</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>udp</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>tcp</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>unix</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>qemu-vdagent</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>dbus</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </console>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   </devices>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <features>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <gic supported='no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <vmcoreinfo supported='yes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <genid supported='yes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <backingStoreInput supported='yes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <backup supported='yes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <async-teardown supported='yes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <ps2 supported='yes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <sev supported='no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <sgx supported='no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <hyperv supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='features'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>relaxed</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>vapic</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>spinlocks</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>vpindex</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>runtime</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>synic</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>stimer</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>reset</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>vendor_id</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>frequencies</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>reenlightenment</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>tlbflush</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>ipi</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>avic</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>emsr_bitmap</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>xmm_input</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <defaults>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <spinlocks>4095</spinlocks>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <stimer_direct>on</stimer_direct>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <tlbflush_direct>on</tlbflush_direct>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <tlbflush_extended>on</tlbflush_extended>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </defaults>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </hyperv>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <launchSecurity supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='sectype'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>tdx</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </launchSecurity>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   </features>
Nov 23 21:01:56 compute-1 nova_compute[230183]: </domainCapabilities>
Nov 23 21:01:56 compute-1 nova_compute[230183]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.277 230187 DEBUG nova.virt.libvirt.host [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 23 21:01:56 compute-1 nova_compute[230183]: <domainCapabilities>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <path>/usr/libexec/qemu-kvm</path>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <domain>kvm</domain>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <arch>x86_64</arch>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <vcpu max='240'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <iothreads supported='yes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <os supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <enum name='firmware'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <loader supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='type'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>rom</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>pflash</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='readonly'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>yes</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>no</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='secure'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>no</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </loader>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   </os>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <cpu>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <mode name='host-passthrough' supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='hostPassthroughMigratable'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>on</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>off</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </mode>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <mode name='maximum' supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='maximumMigratable'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>on</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>off</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </mode>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <mode name='host-model' supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <vendor>AMD</vendor>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='x2apic'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='tsc-deadline'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='hypervisor'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='tsc_adjust'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='spec-ctrl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='stibp'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='ssbd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='cmp_legacy'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='overflow-recov'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='succor'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='ibrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='amd-ssbd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='virt-ssbd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='lbrv'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='tsc-scale'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='vmcb-clean'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='flushbyasid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='pause-filter'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='pfthreshold'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='svme-addr-chk'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <feature policy='disable' name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </mode>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <mode name='custom' supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Broadwell'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Broadwell-IBRS'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Broadwell-noTSX'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Broadwell-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Broadwell-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Broadwell-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Broadwell-v4'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cascadelake-Server'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cascadelake-Server-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cascadelake-Server-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cascadelake-Server-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cascadelake-Server-v4'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cascadelake-Server-v5'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cooperlake'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cooperlake-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Cooperlake-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Denverton'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='mpx'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Denverton-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='mpx'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Denverton-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Denverton-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Dhyana-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-Genoa'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amd-psfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='auto-ibrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='no-nested-data-bp'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='null-sel-clr-base'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='stibp-always-on'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-Genoa-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amd-psfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='auto-ibrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='no-nested-data-bp'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='null-sel-clr-base'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='stibp-always-on'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-Milan'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-Milan-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-Milan-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amd-psfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='no-nested-data-bp'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='null-sel-clr-base'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='stibp-always-on'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-Rome'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-Rome-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-Rome-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-Rome-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='EPYC-v4'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='GraniteRapids'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-int8'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-tile'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fbsdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrc'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fzrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='mcdt-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pbrsb-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='prefetchiti'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='psdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='serialize'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='GraniteRapids-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-int8'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-tile'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fbsdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrc'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fzrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='mcdt-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pbrsb-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='prefetchiti'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='psdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='serialize'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='GraniteRapids-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-int8'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-tile'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx10'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx10-128'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx10-256'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx10-512'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='cldemote'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fbsdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrc'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fzrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='mcdt-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdir64b'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdiri'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pbrsb-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='prefetchiti'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='psdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='serialize'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Haswell'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Haswell-IBRS'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Haswell-noTSX'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Haswell-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Haswell-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Haswell-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Haswell-v4'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Icelake-Server'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Icelake-Server-noTSX'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Icelake-Server-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Icelake-Server-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Icelake-Server-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Icelake-Server-v4'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Icelake-Server-v5'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Icelake-Server-v6'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Icelake-Server-v7'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='IvyBridge'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='IvyBridge-IBRS'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='IvyBridge-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='IvyBridge-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='KnightsMill'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-4fmaps'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-4vnniw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512er'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512pf'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='KnightsMill-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-4fmaps'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-4vnniw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512er'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512pf'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Opteron_G4'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fma4'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xop'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Opteron_G4-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fma4'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xop'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Opteron_G5'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fma4'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='tbm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xop'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Opteron_G5-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fma4'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='tbm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xop'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='SapphireRapids'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-int8'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-tile'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrc'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fzrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='serialize'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='SapphireRapids-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-int8'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-tile'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrc'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fzrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='serialize'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='SapphireRapids-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-int8'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-tile'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fbsdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrc'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fzrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='psdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='serialize'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='SapphireRapids-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-int8'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='amx-tile'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-bf16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-fp16'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512-vpopcntdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bitalg'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vbmi2'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='cldemote'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fbsdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrc'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fzrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='la57'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdir64b'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdiri'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='psdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='serialize'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='taa-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='tsx-ldtrk'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xfd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='SierraForest'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-ne-convert'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni-int8'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='cmpccxadd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fbsdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='mcdt-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pbrsb-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='psdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='serialize'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='SierraForest-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-ifma'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-ne-convert'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx-vnni-int8'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='bus-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='cmpccxadd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fbsdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='fsrs'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ibrs-all'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='mcdt-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pbrsb-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='psdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='sbdr-ssdp-no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='serialize'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vaes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='vpclmulqdq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Client'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Client-IBRS'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Client-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Client-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Client-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Client-v4'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Server'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Server-IBRS'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Server-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Server-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='hle'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='rtm'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Server-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Server-v4'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Skylake-Server-v5'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512bw'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512cd'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512dq'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512f'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='avx512vl'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='invpcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pcid'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='pku'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Snowridge'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='cldemote'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='core-capability'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdir64b'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdiri'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='mpx'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='split-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Snowridge-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='cldemote'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='core-capability'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdir64b'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdiri'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='mpx'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='split-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Snowridge-v2'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='cldemote'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='core-capability'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdir64b'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdiri'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='split-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Snowridge-v3'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='cldemote'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='core-capability'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdir64b'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdiri'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='split-lock-detect'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='Snowridge-v4'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='cldemote'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='erms'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='gfni'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdir64b'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='movdiri'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='xsaves'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='athlon'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='3dnow'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='3dnowext'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='athlon-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='3dnow'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='3dnowext'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='core2duo'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='core2duo-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='coreduo'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='coreduo-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='n270'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='n270-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='ss'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='phenom'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='3dnow'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='3dnowext'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <blockers model='phenom-v1'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='3dnow'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <feature name='3dnowext'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </blockers>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </mode>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   </cpu>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <memoryBacking supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <enum name='sourceType'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <value>file</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <value>anonymous</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <value>memfd</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   </memoryBacking>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <devices>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <disk supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='diskDevice'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>disk</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>cdrom</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>floppy</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>lun</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='bus'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>ide</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>fdc</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>scsi</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>virtio</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>usb</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>sata</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='model'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>virtio</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>virtio-transitional</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>virtio-non-transitional</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </disk>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <graphics supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='type'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>vnc</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>egl-headless</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>dbus</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </graphics>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <video supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='modelType'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>vga</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>cirrus</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>virtio</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>none</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>bochs</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>ramfb</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </video>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <hostdev supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='mode'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>subsystem</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='startupPolicy'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>default</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>mandatory</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>requisite</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>optional</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='subsysType'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>usb</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>pci</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>scsi</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='capsType'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='pciBackend'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </hostdev>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <rng supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='model'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>virtio</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>virtio-transitional</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>virtio-non-transitional</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='backendModel'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>random</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>egd</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>builtin</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </rng>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <filesystem supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='driverType'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>path</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>handle</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>virtiofs</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </filesystem>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <tpm supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='model'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>tpm-tis</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>tpm-crb</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='backendModel'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>emulator</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>external</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='backendVersion'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>2.0</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </tpm>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <redirdev supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='bus'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>usb</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </redirdev>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <channel supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='type'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>pty</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>unix</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </channel>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <crypto supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='model'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='type'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>qemu</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='backendModel'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>builtin</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </crypto>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <interface supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='backendType'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>default</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>passt</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </interface>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <panic supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='model'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>isa</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>hyperv</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </panic>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <console supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='type'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>null</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>vc</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>pty</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>dev</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>file</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>pipe</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>stdio</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>udp</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>tcp</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>unix</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>qemu-vdagent</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>dbus</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </console>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   </devices>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   <features>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <gic supported='no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <vmcoreinfo supported='yes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <genid supported='yes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <backingStoreInput supported='yes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <backup supported='yes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <async-teardown supported='yes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <ps2 supported='yes'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <sev supported='no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <sgx supported='no'/>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <hyperv supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='features'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>relaxed</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>vapic</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>spinlocks</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>vpindex</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>runtime</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>synic</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>stimer</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>reset</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>vendor_id</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>frequencies</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>reenlightenment</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>tlbflush</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>ipi</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>avic</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>emsr_bitmap</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>xmm_input</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <defaults>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <spinlocks>4095</spinlocks>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <stimer_direct>on</stimer_direct>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <tlbflush_direct>on</tlbflush_direct>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <tlbflush_extended>on</tlbflush_extended>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </defaults>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </hyperv>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     <launchSecurity supported='yes'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       <enum name='sectype'>
Nov 23 21:01:56 compute-1 nova_compute[230183]:         <value>tdx</value>
Nov 23 21:01:56 compute-1 nova_compute[230183]:       </enum>
Nov 23 21:01:56 compute-1 nova_compute[230183]:     </launchSecurity>
Nov 23 21:01:56 compute-1 nova_compute[230183]:   </features>
Nov 23 21:01:56 compute-1 nova_compute[230183]: </domainCapabilities>
Nov 23 21:01:56 compute-1 nova_compute[230183]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.347 230187 DEBUG nova.virt.libvirt.host [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.348 230187 INFO nova.virt.libvirt.host [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Secure Boot support detected
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.350 230187 INFO nova.virt.libvirt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.359 230187 DEBUG nova.virt.libvirt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.388 230187 INFO nova.virt.node [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Determined node identity bb217351-d4c8-44a4-9137-08393a1f72bc from /var/lib/nova/compute_id
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.420 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Verified node bb217351-d4c8-44a4-9137-08393a1f72bc matches my host compute-1.ctlplane.example.com _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.443 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 23 21:01:56 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:01:56 compute-1 ceph-mon[80135]: pgmap v607: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 597 B/s wr, 2 op/s
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.522 230187 ERROR nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Could not retrieve compute node resource provider bb217351-d4c8-44a4-9137-08393a1f72bc and therefore unable to error out any instances stuck in BUILDING state. Error: Failed to retrieve allocations for resource provider bb217351-d4c8-44a4-9137-08393a1f72bc: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider 'bb217351-d4c8-44a4-9137-08393a1f72bc' not found: No resource provider with uuid bb217351-d4c8-44a4-9137-08393a1f72bc found  ", "request_id": "req-22c03036-ef8f-4b6b-9a46-4b12f6bd7418"}]}: nova.exception.ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider bb217351-d4c8-44a4-9137-08393a1f72bc: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider 'bb217351-d4c8-44a4-9137-08393a1f72bc' not found: No resource provider with uuid bb217351-d4c8-44a4-9137-08393a1f72bc found  ", "request_id": "req-22c03036-ef8f-4b6b-9a46-4b12f6bd7418"}]}
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.549 230187 DEBUG oslo_concurrency.lockutils [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.550 230187 DEBUG oslo_concurrency.lockutils [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.550 230187 DEBUG oslo_concurrency.lockutils [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.550 230187 DEBUG nova.compute.resource_tracker [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.551 230187 DEBUG oslo_concurrency.processutils [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:01:56 compute-1 python3.9[230374]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 23 21:01:56 compute-1 systemd[1]: Started libpod-conmon-3eab058616580740aadc24acbbd43c84853a46eb879fdefff975864a15415e9c.scope.
Nov 23 21:01:56 compute-1 systemd[1]: Started libcrun container.
Nov 23 21:01:56 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ed1cbac57cda04229a5b7b3556732e004a25d45ef3871dc202ce4359ec7436f/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 23 21:01:56 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ed1cbac57cda04229a5b7b3556732e004a25d45ef3871dc202ce4359ec7436f/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 23 21:01:56 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ed1cbac57cda04229a5b7b3556732e004a25d45ef3871dc202ce4359ec7436f/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 23 21:01:56 compute-1 podman[230418]: 2025-11-23 21:01:56.803058611 +0000 UTC m=+0.116392347 container init 3eab058616580740aadc24acbbd43c84853a46eb879fdefff975864a15415e9c (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 23 21:01:56 compute-1 podman[230418]: 2025-11-23 21:01:56.811058654 +0000 UTC m=+0.124392370 container start 3eab058616580740aadc24acbbd43c84853a46eb879fdefff975864a15415e9c (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.schema-version=1.0)
Nov 23 21:01:56 compute-1 python3.9[230374]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 23 21:01:56 compute-1 nova_compute_init[230438]: INFO:nova_statedir:Applying nova statedir ownership
Nov 23 21:01:56 compute-1 nova_compute_init[230438]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 23 21:01:56 compute-1 nova_compute_init[230438]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 23 21:01:56 compute-1 nova_compute_init[230438]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 23 21:01:56 compute-1 nova_compute_init[230438]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 23 21:01:56 compute-1 nova_compute_init[230438]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 23 21:01:56 compute-1 nova_compute_init[230438]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 23 21:01:56 compute-1 nova_compute_init[230438]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 23 21:01:56 compute-1 nova_compute_init[230438]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 23 21:01:56 compute-1 nova_compute_init[230438]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 23 21:01:56 compute-1 nova_compute_init[230438]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 23 21:01:56 compute-1 nova_compute_init[230438]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 23 21:01:56 compute-1 nova_compute_init[230438]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 23 21:01:56 compute-1 nova_compute_init[230438]: INFO:nova_statedir:Nova statedir ownership complete
Nov 23 21:01:56 compute-1 systemd[1]: libpod-3eab058616580740aadc24acbbd43c84853a46eb879fdefff975864a15415e9c.scope: Deactivated successfully.
Nov 23 21:01:56 compute-1 rsyslogd[1004]: imjournal from <np0005532762:nova_compute>: begin to drop messages due to rate-limiting
Nov 23 21:01:56 compute-1 podman[230439]: 2025-11-23 21:01:56.884342154 +0000 UTC m=+0.034012296 container died 3eab058616580740aadc24acbbd43c84853a46eb879fdefff975864a15415e9c (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Nov 23 21:01:56 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:01:56 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2689470220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:01:56 compute-1 nova_compute[230183]: 2025-11-23 21:01:56.982 230187 DEBUG oslo_concurrency.processutils [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:01:57 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3eab058616580740aadc24acbbd43c84853a46eb879fdefff975864a15415e9c-userdata-shm.mount: Deactivated successfully.
Nov 23 21:01:57 compute-1 systemd[1]: var-lib-containers-storage-overlay-7ed1cbac57cda04229a5b7b3556732e004a25d45ef3871dc202ce4359ec7436f-merged.mount: Deactivated successfully.
Nov 23 21:01:57 compute-1 podman[230450]: 2025-11-23 21:01:57.059892534 +0000 UTC m=+0.175418347 container cleanup 3eab058616580740aadc24acbbd43c84853a46eb879fdefff975864a15415e9c (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init)
Nov 23 21:01:57 compute-1 systemd[1]: libpod-conmon-3eab058616580740aadc24acbbd43c84853a46eb879fdefff975864a15415e9c.scope: Deactivated successfully.
Nov 23 21:01:57 compute-1 sudo[230372]: pam_unix(sudo:session): session closed for user root
Nov 23 21:01:57 compute-1 nova_compute[230183]: 2025-11-23 21:01:57.141 230187 WARNING nova.virt.libvirt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 21:01:57 compute-1 nova_compute[230183]: 2025-11-23 21:01:57.142 230187 DEBUG nova.compute.resource_tracker [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5248MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 21:01:57 compute-1 nova_compute[230183]: 2025-11-23 21:01:57.142 230187 DEBUG oslo_concurrency.lockutils [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:01:57 compute-1 nova_compute[230183]: 2025-11-23 21:01:57.142 230187 DEBUG oslo_concurrency.lockutils [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:01:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:01:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:57.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:01:57 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2689470220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:01:57 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3363672488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:01:57 compute-1 nova_compute[230183]: 2025-11-23 21:01:57.773 230187 ERROR nova.compute.resource_tracker [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Skipping removal of allocations for deleted instances: Failed to retrieve allocations for resource provider bb217351-d4c8-44a4-9137-08393a1f72bc: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider 'bb217351-d4c8-44a4-9137-08393a1f72bc' not found: No resource provider with uuid bb217351-d4c8-44a4-9137-08393a1f72bc found  ", "request_id": "req-07105696-8ed9-4395-abc2-d33d0c044875"}]}: nova.exception.ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider bb217351-d4c8-44a4-9137-08393a1f72bc: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider 'bb217351-d4c8-44a4-9137-08393a1f72bc' not found: No resource provider with uuid bb217351-d4c8-44a4-9137-08393a1f72bc found  ", "request_id": "req-07105696-8ed9-4395-abc2-d33d0c044875"}]}
Nov 23 21:01:57 compute-1 nova_compute[230183]: 2025-11-23 21:01:57.774 230187 DEBUG nova.compute.resource_tracker [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 21:01:57 compute-1 nova_compute[230183]: 2025-11-23 21:01:57.774 230187 DEBUG nova.compute.resource_tracker [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 21:01:57 compute-1 sshd-session[200632]: Connection closed by 192.168.122.30 port 37818
Nov 23 21:01:57 compute-1 sshd-session[200628]: pam_unix(sshd:session): session closed for user zuul
Nov 23 21:01:57 compute-1 systemd[1]: session-53.scope: Deactivated successfully.
Nov 23 21:01:57 compute-1 systemd[1]: session-53.scope: Consumed 2min 15.365s CPU time.
Nov 23 21:01:57 compute-1 systemd-logind[793]: Session 53 logged out. Waiting for processes to exit.
Nov 23 21:01:57 compute-1 systemd-logind[793]: Removed session 53.
Nov 23 21:01:57 compute-1 nova_compute[230183]: 2025-11-23 21:01:57.861 230187 INFO nova.scheduler.client.report [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [req-c9ae2ac4-9e14-4189-a603-20283ca9b070] Created resource provider record via placement API for resource provider with UUID bb217351-d4c8-44a4-9137-08393a1f72bc and name compute-1.ctlplane.example.com.
Nov 23 21:01:57 compute-1 nova_compute[230183]: 2025-11-23 21:01:57.885 230187 DEBUG oslo_concurrency.processutils [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:01:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:01:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:58.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:01:58 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:01:58 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2804441142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:01:58 compute-1 nova_compute[230183]: 2025-11-23 21:01:58.346 230187 DEBUG oslo_concurrency.processutils [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:01:58 compute-1 nova_compute[230183]: 2025-11-23 21:01:58.352 230187 DEBUG nova.virt.libvirt.host [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 23 21:01:58 compute-1 nova_compute[230183]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Nov 23 21:01:58 compute-1 nova_compute[230183]: 2025-11-23 21:01:58.352 230187 INFO nova.virt.libvirt.host [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] kernel doesn't support AMD SEV
Nov 23 21:01:58 compute-1 nova_compute[230183]: 2025-11-23 21:01:58.353 230187 DEBUG nova.compute.provider_tree [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Updating inventory in ProviderTree for provider bb217351-d4c8-44a4-9137-08393a1f72bc with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 23 21:01:58 compute-1 nova_compute[230183]: 2025-11-23 21:01:58.353 230187 DEBUG nova.virt.libvirt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 23 21:01:58 compute-1 nova_compute[230183]: 2025-11-23 21:01:58.419 230187 DEBUG nova.scheduler.client.report [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Updated inventory for provider bb217351-d4c8-44a4-9137-08393a1f72bc with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 23 21:01:58 compute-1 nova_compute[230183]: 2025-11-23 21:01:58.420 230187 DEBUG nova.compute.provider_tree [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Updating resource provider bb217351-d4c8-44a4-9137-08393a1f72bc generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 23 21:01:58 compute-1 nova_compute[230183]: 2025-11-23 21:01:58.421 230187 DEBUG nova.compute.provider_tree [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Updating inventory in ProviderTree for provider bb217351-d4c8-44a4-9137-08393a1f72bc with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 23 21:01:58 compute-1 nova_compute[230183]: 2025-11-23 21:01:58.507 230187 DEBUG nova.compute.provider_tree [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Updating resource provider bb217351-d4c8-44a4-9137-08393a1f72bc generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 23 21:01:58 compute-1 nova_compute[230183]: 2025-11-23 21:01:58.525 230187 DEBUG nova.compute.resource_tracker [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 21:01:58 compute-1 nova_compute[230183]: 2025-11-23 21:01:58.525 230187 DEBUG oslo_concurrency.lockutils [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.383s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:01:58 compute-1 nova_compute[230183]: 2025-11-23 21:01:58.525 230187 DEBUG nova.service [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Nov 23 21:01:58 compute-1 ceph-mon[80135]: pgmap v608: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 21:01:58 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/4209853945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:01:58 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2804441142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:01:58 compute-1 nova_compute[230183]: 2025-11-23 21:01:58.567 230187 DEBUG nova.service [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Nov 23 21:01:58 compute-1 nova_compute[230183]: 2025-11-23 21:01:58.568 230187 DEBUG nova.servicegroup.drivers.db [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] DB_Driver: join new ServiceGroup member compute-1.ctlplane.example.com to the compute group, service = <Service: host=compute-1.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Nov 23 21:01:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:01:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:01:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:59.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:02:00.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:00 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:02:00 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/602735030' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:02:00 compute-1 ceph-mon[80135]: pgmap v609: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 21:02:00 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3547933507' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:02:00 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/602735030' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:02:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:02:01.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:01 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:02:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:01 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 21:02:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:01 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 23 21:02:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:01 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 23 21:02:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:01 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 23 21:02:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:01 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 23 21:02:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:01 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 23 21:02:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:01 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 23 21:02:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:01 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 21:02:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:01 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 21:02:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:01 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 21:02:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:01 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 23 21:02:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:01 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 21:02:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:01 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 23 21:02:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:01 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 23 21:02:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:01 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 23 21:02:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:01 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 23 21:02:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:01 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 23 21:02:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:01 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 23 21:02:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:01 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 23 21:02:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:01 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 23 21:02:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:01 : epoch 6923763b : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 23 21:02:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:01 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 23 21:02:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:01 : epoch 6923763b : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 23 21:02:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:01 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 23 21:02:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:01 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 21:02:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:01 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 23 21:02:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:01 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 21:02:01 compute-1 sudo[230542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:02:01 compute-1 sudo[230542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:02:01 compute-1 sudo[230542]: pam_unix(sudo:session): session closed for user root
Nov 23 21:02:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:02 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f564c000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:02:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:02:02.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:02:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:02 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56340016e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:02 compute-1 ceph-mon[80135]: pgmap v610: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 21:02:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:02 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5624000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:02:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:02:03.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:02:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:02:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:04 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f561c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:02:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:02:04.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:02:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210204 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 21:02:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:04 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56340016e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:04 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56340016e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:02:05.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:05 compute-1 ceph-mon[80135]: pgmap v611: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 21:02:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:06 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56240016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:02:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:02:06.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:02:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:06 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f561c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:06 compute-1 ceph-mon[80135]: pgmap v612: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 21:02:06 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:02:06 compute-1 podman[230571]: 2025-11-23 21:02:06.685090079 +0000 UTC m=+0.087185410 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 23 21:02:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:06 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56280012e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:06 compute-1 podman[230590]: 2025-11-23 21:02:06.824525448 +0000 UTC m=+0.101937313 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 23 21:02:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:02:07.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:08 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:08 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56340016e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:02:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:02:08.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:02:08 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:08 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56240016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:08 compute-1 ceph-mon[80135]: pgmap v613: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 21:02:08 compute-1 nova_compute[230183]: 2025-11-23 21:02:08.570 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:02:08 compute-1 nova_compute[230183]: 2025-11-23 21:02:08.587 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:02:08 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:08 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f561c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:02:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:02:09.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:02:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210210 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 21:02:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:10 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5628001e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:02:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:02:10.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:02:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:10 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56340016e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:10 compute-1 ceph-mon[80135]: pgmap v614: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 21:02:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:10 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56240016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:02:11.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:11 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:02:12 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:12 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f561c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:02:12.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:12 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:12 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5628001e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:12 compute-1 ceph-mon[80135]: pgmap v615: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 21:02:12 compute-1 podman[230619]: 2025-11-23 21:02:12.66313806 +0000 UTC m=+0.080511343 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Nov 23 21:02:12 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:12 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56340016e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:02:13.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:14 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5624002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:02:14.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:14 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f561c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:14 compute-1 ceph-mon[80135]: pgmap v616: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 23 21:02:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:14 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5628001e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:02:15.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:16 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56340016e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:02:16.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:16 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5624002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:16 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:02:16 compute-1 ceph-mon[80135]: pgmap v617: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 21:02:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:16 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f561c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:02:17.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:18 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:18 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5628003290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:02:18.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:18 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:18 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56340016e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:18 compute-1 ceph-mon[80135]: pgmap v618: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 21:02:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:02:18 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:18 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5624002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:02:19.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:19 : epoch 6923763b : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 21:02:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:20 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f561c002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:02:20.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:20 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5628003290 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:20 compute-1 ceph-mon[80135]: pgmap v619: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 21:02:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:20 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56340037a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:02:21.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:21 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:02:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:22 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5624003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:22 compute-1 sudo[230646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:02:22 compute-1 sudo[230646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:02:22 compute-1 sudo[230646]: pam_unix(sudo:session): session closed for user root
Nov 23 21:02:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:02:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:02:22.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:02:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:22 : epoch 6923763b : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 21:02:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:22 : epoch 6923763b : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 21:02:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:22 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5624003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:22 compute-1 ceph-mon[80135]: pgmap v620: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Nov 23 21:02:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:22 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f561c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:02:23.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:23 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 21:02:23 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1283665357' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 21:02:23 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 21:02:23 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1283665357' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 21:02:23 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/1059527712' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 21:02:23 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/1059527712' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 21:02:23 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/1283665357' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 21:02:23 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/1283665357' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 21:02:24 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:24 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5624003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:02:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:02:24.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:02:24 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:24 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5628003290 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:24 compute-1 ceph-mon[80135]: pgmap v621: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Nov 23 21:02:24 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/156012811' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 21:02:24 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/156012811' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 21:02:24 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:24 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56340037a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:02:25.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:25 : epoch 6923763b : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 21:02:26 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:26 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56340037a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:02:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:02:26.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:02:26 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:26 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56340037a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:26 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:02:26 compute-1 ceph-mon[80135]: pgmap v622: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 21:02:26 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:26 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5628004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:02:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:02:27.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:02:27 compute-1 ceph-mon[80135]: pgmap v623: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 21:02:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:28 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5624003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:02:28.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:28 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5624003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:28 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56340037a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:02:29.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:30 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:30 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5628004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:30 compute-1 ceph-mon[80135]: pgmap v624: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 21:02:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:02:30.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:30 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:30 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5628004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:30 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:30 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5624003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:02:31.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:31 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:02:32 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210232 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 21:02:32 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:32 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5648001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:32 compute-1 ceph-mon[80135]: pgmap v625: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 21:02:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:02:32.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:32 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:32 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f561c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:32 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:32 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5628004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:02:33.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:34 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:34 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5624003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:34 compute-1 ceph-mon[80135]: pgmap v626: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 852 B/s wr, 2 op/s
Nov 23 21:02:34 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:02:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:02:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:02:34.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:02:34 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:34 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56480025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:34 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:34 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f563c000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:02:35.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:36 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:36 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5628004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:36 compute-1 ceph-mon[80135]: pgmap v627: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 853 B/s wr, 2 op/s
Nov 23 21:02:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 21:02:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:02:36.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 21:02:36 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:36 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5624003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:36 compute-1 sshd-session[230679]: Invalid user solv from 161.35.133.66 port 48650
Nov 23 21:02:36 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:02:36 compute-1 sshd-session[230679]: Connection closed by invalid user solv 161.35.133.66 port 48650 [preauth]
Nov 23 21:02:36 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:36 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56480025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:02:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:02:37.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:02:37 compute-1 podman[230683]: 2025-11-23 21:02:37.661330594 +0000 UTC m=+0.063514120 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 23 21:02:37 compute-1 podman[230682]: 2025-11-23 21:02:37.701849732 +0000 UTC m=+0.106554816 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 23 21:02:38 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:38 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f563c001930 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:02:38.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:38 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:38 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5628004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:38 compute-1 ceph-mon[80135]: pgmap v628: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 23 21:02:38 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:38 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5624003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:02:39.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:40 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:40 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56480032d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:02:40 compute-1 ceph-mon[80135]: pgmap v629: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 23 21:02:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:02:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:02:40.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:02:40 compute-1 kernel: ganesha.nfsd[230530]: segfault at 50 ip 00007f56fd9b632e sp 00007f56ccff8210 error 4 in libntirpc.so.5.8[7f56fd99b000+2c000] likely on CPU 7 (core 0, socket 7)
Nov 23 21:02:40 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 23 21:02:40 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:02:40 : epoch 6923763b : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56480032d0 fd 48 proxy ignored for local
Nov 23 21:02:40 compute-1 systemd[1]: Started Process Core Dump (PID 230726/UID 0).
Nov 23 21:02:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:02:41.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:41 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:02:42 compute-1 systemd-coredump[230727]: Process 229189 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 42:
                                                    #0  0x00007f56fd9b632e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 23 21:02:42 compute-1 ceph-mon[80135]: pgmap v630: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 23 21:02:42 compute-1 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 21:02:42 compute-1 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 21:02:42 compute-1 sudo[230731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:02:42 compute-1 sudo[230731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:02:42 compute-1 sudo[230731]: pam_unix(sudo:session): session closed for user root
Nov 23 21:02:42 compute-1 systemd[1]: systemd-coredump@8-230726-0.service: Deactivated successfully.
Nov 23 21:02:42 compute-1 systemd[1]: systemd-coredump@8-230726-0.service: Consumed 1.281s CPU time.
Nov 23 21:02:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:02:42.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:42 compute-1 podman[230759]: 2025-11-23 21:02:42.25430849 +0000 UTC m=+0.026494816 container died a20cc2100a0ce143f194bbe51ab7e7ee427f407c69a4b8a256f1b12ed5026683 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Nov 23 21:02:42 compute-1 systemd[1]: var-lib-containers-storage-overlay-ced5f9ab2d01d0012c20a9d3e4190fdc56bf8f17f77de53f364aa847543f0855-merged.mount: Deactivated successfully.
Nov 23 21:02:42 compute-1 podman[230759]: 2025-11-23 21:02:42.301814513 +0000 UTC m=+0.074000839 container remove a20cc2100a0ce143f194bbe51ab7e7ee427f407c69a4b8a256f1b12ed5026683 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 21:02:42 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Main process exited, code=exited, status=139/n/a
Nov 23 21:02:42 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Failed with result 'exit-code'.
Nov 23 21:02:42 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.570s CPU time.
Nov 23 21:02:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:02:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:02:43.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:02:43 compute-1 podman[230802]: 2025-11-23 21:02:43.678802182 +0000 UTC m=+0.079268030 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 21:02:44 compute-1 ceph-mon[80135]: pgmap v631: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:02:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:02:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:02:44.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:02:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:02:45.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:46 compute-1 ceph-mon[80135]: pgmap v632: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:02:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:02:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:02:46.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:02:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210246 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 21:02:46 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:02:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:02:47.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:47 compute-1 sudo[230825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 21:02:47 compute-1 sudo[230825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:02:47 compute-1 sudo[230825]: pam_unix(sudo:session): session closed for user root
Nov 23 21:02:47 compute-1 sudo[230850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 21:02:47 compute-1 sudo[230850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:02:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:02:48.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:48 compute-1 ceph-mon[80135]: pgmap v633: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:02:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:02:48 compute-1 sudo[230850]: pam_unix(sudo:session): session closed for user root
Nov 23 21:02:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:02:49.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:49 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:02:49 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 21:02:49 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:02:49 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:02:49 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 21:02:49 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 21:02:49 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:02:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:02:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:02:50.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:02:50 compute-1 ceph-mon[80135]: pgmap v634: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:02:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:02:51.060 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:02:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:02:51.060 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:02:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:02:51.060 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:02:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:02:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:02:51.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:02:51 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:02:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:02:52.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:52 compute-1 ceph-mon[80135]: pgmap v635: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 0 B/s wr, 152 op/s
Nov 23 21:02:52 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Scheduled restart job, restart counter is at 9.
Nov 23 21:02:52 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 21:02:52 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.570s CPU time.
Nov 23 21:02:52 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 21:02:52 compute-1 podman[230952]: 2025-11-23 21:02:52.766015627 +0000 UTC m=+0.042790789 container create 5f2bb77731781714ca2bb677abc38580fbcf9262dee8e9de7400628ca8495195 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 21:02:52 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a83b7eb3f5afeb4b96d0cc4d64fcb7e4fecc6be41db13f731cf53b24c499dfe1/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 23 21:02:52 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a83b7eb3f5afeb4b96d0cc4d64fcb7e4fecc6be41db13f731cf53b24c499dfe1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 21:02:52 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a83b7eb3f5afeb4b96d0cc4d64fcb7e4fecc6be41db13f731cf53b24c499dfe1/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 21:02:52 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a83b7eb3f5afeb4b96d0cc4d64fcb7e4fecc6be41db13f731cf53b24c499dfe1/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.fuxuha-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 21:02:52 compute-1 podman[230952]: 2025-11-23 21:02:52.81385779 +0000 UTC m=+0.090632972 container init 5f2bb77731781714ca2bb677abc38580fbcf9262dee8e9de7400628ca8495195 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 23 21:02:52 compute-1 podman[230952]: 2025-11-23 21:02:52.820016503 +0000 UTC m=+0.096791665 container start 5f2bb77731781714ca2bb677abc38580fbcf9262dee8e9de7400628ca8495195 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 21:02:52 compute-1 bash[230952]: 5f2bb77731781714ca2bb677abc38580fbcf9262dee8e9de7400628ca8495195
Nov 23 21:02:52 compute-1 podman[230952]: 2025-11-23 21:02:52.748983704 +0000 UTC m=+0.025758886 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 21:02:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:02:52 : epoch 6923767c : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 23 21:02:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:02:52 : epoch 6923767c : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 23 21:02:52 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 21:02:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:02:52 : epoch 6923767c : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 23 21:02:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:02:52 : epoch 6923767c : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 23 21:02:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:02:52 : epoch 6923767c : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 23 21:02:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:02:52 : epoch 6923767c : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 23 21:02:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:02:52 : epoch 6923767c : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 23 21:02:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:02:52 : epoch 6923767c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 21:02:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:02:53.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:53 compute-1 sudo[231010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 21:02:53 compute-1 sudo[231010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:02:53 compute-1 sudo[231010]: pam_unix(sudo:session): session closed for user root
Nov 23 21:02:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:02:54.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:54 compute-1 ceph-mon[80135]: pgmap v636: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 0 B/s wr, 152 op/s
Nov 23 21:02:54 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:02:54 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:02:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:02:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:02:55.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:02:55 compute-1 nova_compute[230183]: 2025-11-23 21:02:55.428 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:02:55 compute-1 nova_compute[230183]: 2025-11-23 21:02:55.429 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:02:55 compute-1 nova_compute[230183]: 2025-11-23 21:02:55.429 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 21:02:55 compute-1 nova_compute[230183]: 2025-11-23 21:02:55.429 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 21:02:55 compute-1 nova_compute[230183]: 2025-11-23 21:02:55.462 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 21:02:55 compute-1 nova_compute[230183]: 2025-11-23 21:02:55.462 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:02:55 compute-1 nova_compute[230183]: 2025-11-23 21:02:55.462 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:02:55 compute-1 nova_compute[230183]: 2025-11-23 21:02:55.463 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:02:55 compute-1 nova_compute[230183]: 2025-11-23 21:02:55.463 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:02:55 compute-1 nova_compute[230183]: 2025-11-23 21:02:55.463 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:02:55 compute-1 nova_compute[230183]: 2025-11-23 21:02:55.463 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:02:55 compute-1 nova_compute[230183]: 2025-11-23 21:02:55.464 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 21:02:55 compute-1 nova_compute[230183]: 2025-11-23 21:02:55.464 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:02:55 compute-1 nova_compute[230183]: 2025-11-23 21:02:55.504 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:02:55 compute-1 nova_compute[230183]: 2025-11-23 21:02:55.504 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:02:55 compute-1 nova_compute[230183]: 2025-11-23 21:02:55.505 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:02:55 compute-1 nova_compute[230183]: 2025-11-23 21:02:55.505 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 21:02:55 compute-1 nova_compute[230183]: 2025-11-23 21:02:55.505 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:02:55 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:02:55 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2548086481' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:02:55 compute-1 nova_compute[230183]: 2025-11-23 21:02:55.929 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:02:56 compute-1 nova_compute[230183]: 2025-11-23 21:02:56.090 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 21:02:56 compute-1 nova_compute[230183]: 2025-11-23 21:02:56.091 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5251MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 21:02:56 compute-1 nova_compute[230183]: 2025-11-23 21:02:56.091 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:02:56 compute-1 nova_compute[230183]: 2025-11-23 21:02:56.092 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:02:56 compute-1 nova_compute[230183]: 2025-11-23 21:02:56.258 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 21:02:56 compute-1 nova_compute[230183]: 2025-11-23 21:02:56.258 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 21:02:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:02:56.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:56 compute-1 ceph-mon[80135]: pgmap v637: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 85 B/s wr, 152 op/s
Nov 23 21:02:56 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2548086481' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:02:56 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3108314033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:02:56 compute-1 nova_compute[230183]: 2025-11-23 21:02:56.299 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:02:56 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:02:56 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:02:56 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/324191540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:02:56 compute-1 nova_compute[230183]: 2025-11-23 21:02:56.728 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:02:56 compute-1 nova_compute[230183]: 2025-11-23 21:02:56.733 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:02:56 compute-1 nova_compute[230183]: 2025-11-23 21:02:56.755 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:02:56 compute-1 nova_compute[230183]: 2025-11-23 21:02:56.757 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 21:02:56 compute-1 nova_compute[230183]: 2025-11-23 21:02:56.757 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:02:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:02:57.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:57 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/324191540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:02:57 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1061901012' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:02:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:02:58.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:58 compute-1 ceph-mon[80135]: pgmap v638: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 85 B/s wr, 152 op/s
Nov 23 21:02:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:02:58 : epoch 6923767c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 21:02:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:02:58 : epoch 6923767c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 21:02:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:02:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:02:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:02:59.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:02:59 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2872003360' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:03:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:03:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:00.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:03:00 compute-1 ceph-mon[80135]: pgmap v639: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 85 B/s wr, 152 op/s
Nov 23 21:03:00 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2397179053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:03:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:03:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:01.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:03:01 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:03:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:03:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:02.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:03:02 compute-1 sudo[231084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:03:02 compute-1 sudo[231084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:03:02 compute-1 sudo[231084]: pam_unix(sudo:session): session closed for user root
Nov 23 21:03:02 compute-1 ceph-mon[80135]: pgmap v640: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 938 B/s wr, 155 op/s
Nov 23 21:03:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:03:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:03.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:03:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:03:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:03:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:04.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:03:04 compute-1 sshd-session[231110]: Invalid user delegate from 92.118.39.92 port 45430
Nov 23 21:03:04 compute-1 ceph-mon[80135]: pgmap v641: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 21:03:04 compute-1 sshd-session[231110]: Connection closed by invalid user delegate 92.118.39.92 port 45430 [preauth]
Nov 23 21:03:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:04 : epoch 6923767c : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 21:03:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:04 : epoch 6923767c : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 23 21:03:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:04 : epoch 6923767c : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 23 21:03:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:04 : epoch 6923767c : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 23 21:03:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:04 : epoch 6923767c : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 23 21:03:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:04 : epoch 6923767c : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 23 21:03:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:04 : epoch 6923767c : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 23 21:03:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:04 : epoch 6923767c : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 21:03:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:04 : epoch 6923767c : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 21:03:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:04 : epoch 6923767c : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 21:03:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:05 : epoch 6923767c : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 23 21:03:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:05 : epoch 6923767c : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 21:03:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:05 : epoch 6923767c : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 23 21:03:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:05 : epoch 6923767c : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 23 21:03:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:05 : epoch 6923767c : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 23 21:03:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:05 : epoch 6923767c : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 23 21:03:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:05 : epoch 6923767c : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 23 21:03:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:05 : epoch 6923767c : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 23 21:03:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:05 : epoch 6923767c : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 23 21:03:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:05 : epoch 6923767c : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 23 21:03:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:05 : epoch 6923767c : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 23 21:03:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:05 : epoch 6923767c : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 23 21:03:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:05 : epoch 6923767c : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 23 21:03:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:05 : epoch 6923767c : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 23 21:03:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:05 : epoch 6923767c : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 21:03:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:05 : epoch 6923767c : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 23 21:03:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:05 : epoch 6923767c : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 21:03:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:03:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:05.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:03:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:06 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d70000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:03:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:06.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:03:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:06 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d6c0013b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:06 compute-1 ceph-mon[80135]: pgmap v642: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 21:03:06 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:03:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:06 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d600016c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:03:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:07.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:03:08 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:08 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:03:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:08.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:03:08 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:08 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d70001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:08 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210308 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 21:03:08 compute-1 ceph-mon[80135]: pgmap v643: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 21:03:08 compute-1 podman[231131]: 2025-11-23 21:03:08.655724643 +0000 UTC m=+0.061228570 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 23 21:03:08 compute-1 podman[231130]: 2025-11-23 21:03:08.707749306 +0000 UTC m=+0.110890421 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 23 21:03:08 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:08 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d6c002090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:08 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Nov 23 21:03:08 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1952705122' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 23 21:03:09 compute-1 rsyslogd[1004]: imjournal: 1295 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 23 21:03:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:03:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:09.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:03:09 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/362349620' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 23 21:03:09 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/1952705122' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 23 21:03:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210310 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 21:03:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:10 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d600021c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:03:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:10.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:03:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:10 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:10 compute-1 ceph-mon[80135]: from='client.24529 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 23 21:03:10 compute-1 ceph-mon[80135]: from='client.24692 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 23 21:03:10 compute-1 ceph-mon[80135]: pgmap v644: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 21:03:10 compute-1 ceph-mon[80135]: from='client.24529 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Nov 23 21:03:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:10 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d70001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:03:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:11.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:03:11 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:03:12 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:12 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d6c002090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:03:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:12.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:03:12 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:12 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d600021c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:12 compute-1 ceph-mon[80135]: pgmap v645: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 21:03:12 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:12 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:03:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:13.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:03:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:14 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d70001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:03:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:14.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:03:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:14 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d6c002090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:14 compute-1 ceph-mon[80135]: pgmap v646: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 23 21:03:14 compute-1 podman[231177]: 2025-11-23 21:03:14.648770851 +0000 UTC m=+0.056506545 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0)
Nov 23 21:03:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:14 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d60002ed0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:03:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:15.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:03:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:16 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:03:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:16.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:03:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:16 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d700091b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:16 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:03:16 compute-1 ceph-mon[80135]: pgmap v647: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Nov 23 21:03:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:16 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d6c003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:03:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:17.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:03:18 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:18 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d60002ed0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:03:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:18.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:03:18 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:18 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:18 compute-1 ceph-mon[80135]: pgmap v648: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 23 21:03:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:03:18 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:18 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d700091b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:18 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:18 : epoch 6923767c : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 21:03:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:03:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:19.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:03:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:20 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d6c003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:20 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d60002ed0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:03:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:20.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:03:20 compute-1 ceph-mon[80135]: pgmap v649: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 23 21:03:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:20 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:03:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:21.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:03:21 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:03:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:21 : epoch 6923767c : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 21:03:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:21 : epoch 6923767c : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 21:03:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:22 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d70009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:22 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d6c003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:03:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:22.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:03:22 compute-1 sudo[231201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:03:22 compute-1 sudo[231201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:03:22 compute-1 sudo[231201]: pam_unix(sudo:session): session closed for user root
Nov 23 21:03:22 compute-1 ceph-mon[80135]: pgmap v650: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Nov 23 21:03:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:22 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d60002ed0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:03:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:23.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:03:23 compute-1 ceph-mon[80135]: pgmap v651: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 21:03:24 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:24 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:24 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:24 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d70009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:03:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:24.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:03:24 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:24 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d6c003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:24 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:24 : epoch 6923767c : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 21:03:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:03:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:25.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:03:26 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:26 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d60002ed0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:26 compute-1 ceph-mon[80135]: pgmap v652: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 21:03:26 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:26 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:03:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:26.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:03:26 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:03:26 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:26 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d70009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:03:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:27.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:03:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:28 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d6c003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:28 compute-1 ceph-mon[80135]: pgmap v653: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 21:03:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:28 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d60002ed0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:03:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:28.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:03:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:28 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:29 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/3674086929' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 23 21:03:29 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/395877405' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 23 21:03:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:03:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:29.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:03:30 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210330 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 21:03:30 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:30 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d70009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:30 compute-1 ceph-mon[80135]: from='client.24556 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 23 21:03:30 compute-1 ceph-mon[80135]: from='client.24553 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 23 21:03:30 compute-1 ceph-mon[80135]: from='client.24553 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Nov 23 21:03:30 compute-1 ceph-mon[80135]: pgmap v654: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 21:03:30 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:30 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d6c003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:03:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:30.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:03:30 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:30 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d60002ed0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:03:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:31.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:03:31 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:03:32 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:32 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:32 compute-1 ceph-mon[80135]: pgmap v655: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 21:03:32 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:32 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d70009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:03:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:32.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:03:32 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:32 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d6c003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:03:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:03:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:33.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:03:34 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:34 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d60002ed0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:34 compute-1 ceph-mon[80135]: pgmap v656: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 21:03:34 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:34 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:03:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:34.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:03:34 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:34 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d70009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:03:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:35.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:03:36 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:36 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d70009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:36 compute-1 ceph-mon[80135]: pgmap v657: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 21:03:36 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:36 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d60002ed0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:03:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:36.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:03:36 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:03:36 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:36 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:03:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:37.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:03:38 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:38 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d70009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:38 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:38 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d70009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:03:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:38.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:03:38 compute-1 ceph-mon[80135]: pgmap v658: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 23 21:03:38 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:38 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d38000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:03:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:39.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:03:39 compute-1 podman[231237]: 2025-11-23 21:03:39.663631442 +0000 UTC m=+0.081338314 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 21:03:39 compute-1 podman[231238]: 2025-11-23 21:03:39.674807969 +0000 UTC m=+0.085504215 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 21:03:40 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:40 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:40 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:40 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d6c003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:03:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:40.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:03:40 compute-1 ceph-mon[80135]: pgmap v659: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 23 21:03:40 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:40 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d6c003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:03:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:41.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:03:41 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:03:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:42 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d380016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:42 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:03:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:42.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:03:42 compute-1 sudo[231281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:03:42 compute-1 sudo[231281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:03:42 compute-1 sudo[231281]: pam_unix(sudo:session): session closed for user root
Nov 23 21:03:42 compute-1 ceph-mon[80135]: pgmap v660: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 23 21:03:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:42 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d6c003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:03:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:43.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:03:43 compute-1 ceph-mon[80135]: pgmap v661: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:03:44 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:44 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d70009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:44 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:44 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d380016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:03:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:44.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:03:44 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:44 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:03:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:45.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:03:45 compute-1 podman[231308]: 2025-11-23 21:03:45.672680866 +0000 UTC m=+0.088159527 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Nov 23 21:03:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:46 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d6c003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:46 compute-1 ceph-mon[80135]: pgmap v662: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:03:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:46 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d70009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:03:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:46.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:03:46 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:03:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:46 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d380016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:03:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:47.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:03:48 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:48 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:48 compute-1 ceph-mon[80135]: pgmap v663: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:03:48 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:48 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d6c003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:03:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:48.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:03:48 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:48 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d70009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:49 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:03:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:03:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:49.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:03:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:50 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d38002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:50 compute-1 ceph-mon[80135]: pgmap v664: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:03:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:50 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:03:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:50.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:03:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:50 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d6c003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:03:51.060 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:03:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:03:51.060 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:03:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:03:51.060 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:03:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:03:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:51.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:03:51 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:03:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:52 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d70009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:52 compute-1 ceph-mon[80135]: pgmap v665: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 23 21:03:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:52 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d38002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:03:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:52.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:03:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:52 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:03:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:53.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:03:53 compute-1 sudo[231332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 21:03:53 compute-1 sudo[231332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:03:53 compute-1 sudo[231332]: pam_unix(sudo:session): session closed for user root
Nov 23 21:03:53 compute-1 sudo[231357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 21:03:53 compute-1 sudo[231357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:03:54 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:54 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d6c003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:54 compute-1 sudo[231357]: pam_unix(sudo:session): session closed for user root
Nov 23 21:03:54 compute-1 ceph-mon[80135]: pgmap v666: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:03:54 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:54 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d70009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:03:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:54.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:03:54 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:54 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d38002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:55 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:03:55 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 21:03:55 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:03:55 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:03:55 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 21:03:55 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 21:03:55 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:03:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:03:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:55.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:03:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210356 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 21:03:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:56 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:56 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d6c003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:03:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:56.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:03:56 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:03:56 compute-1 nova_compute[230183]: 2025-11-23 21:03:56.751 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:03:56 compute-1 nova_compute[230183]: 2025-11-23 21:03:56.751 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:03:56 compute-1 nova_compute[230183]: 2025-11-23 21:03:56.767 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:03:56 compute-1 nova_compute[230183]: 2025-11-23 21:03:56.767 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:03:56 compute-1 nova_compute[230183]: 2025-11-23 21:03:56.767 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:03:56 compute-1 nova_compute[230183]: 2025-11-23 21:03:56.767 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:03:56 compute-1 nova_compute[230183]: 2025-11-23 21:03:56.767 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:03:56 compute-1 nova_compute[230183]: 2025-11-23 21:03:56.767 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:03:56 compute-1 nova_compute[230183]: 2025-11-23 21:03:56.768 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 21:03:56 compute-1 nova_compute[230183]: 2025-11-23 21:03:56.768 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:03:56 compute-1 nova_compute[230183]: 2025-11-23 21:03:56.788 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:03:56 compute-1 nova_compute[230183]: 2025-11-23 21:03:56.788 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:03:56 compute-1 nova_compute[230183]: 2025-11-23 21:03:56.788 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:03:56 compute-1 nova_compute[230183]: 2025-11-23 21:03:56.788 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 21:03:56 compute-1 nova_compute[230183]: 2025-11-23 21:03:56.789 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:03:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:56 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d7000a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:57 compute-1 ceph-mon[80135]: pgmap v667: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:03:57 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:03:57 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1602560503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:03:57 compute-1 nova_compute[230183]: 2025-11-23 21:03:57.257 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:03:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:03:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:57.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:03:57 compute-1 nova_compute[230183]: 2025-11-23 21:03:57.398 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 21:03:57 compute-1 nova_compute[230183]: 2025-11-23 21:03:57.399 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5265MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 21:03:57 compute-1 nova_compute[230183]: 2025-11-23 21:03:57.399 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:03:57 compute-1 nova_compute[230183]: 2025-11-23 21:03:57.399 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:03:57 compute-1 nova_compute[230183]: 2025-11-23 21:03:57.454 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 21:03:57 compute-1 nova_compute[230183]: 2025-11-23 21:03:57.454 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 21:03:57 compute-1 nova_compute[230183]: 2025-11-23 21:03:57.475 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:03:57 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:03:57 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3423069219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:03:57 compute-1 nova_compute[230183]: 2025-11-23 21:03:57.941 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:03:57 compute-1 nova_compute[230183]: 2025-11-23 21:03:57.946 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:03:57 compute-1 nova_compute[230183]: 2025-11-23 21:03:57.962 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:03:57 compute-1 nova_compute[230183]: 2025-11-23 21:03:57.964 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 21:03:57 compute-1 nova_compute[230183]: 2025-11-23 21:03:57.965 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:03:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:58 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d38003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:03:58 compute-1 ceph-mon[80135]: pgmap v668: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:03:58 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1602560503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:03:58 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/4236263840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:03:58 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3423069219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:03:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:58 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c003c10 fd 39 proxy ignored for local
Nov 23 21:03:58 compute-1 kernel: ganesha.nfsd[231128]: segfault at 50 ip 00007f4e1f52732e sp 00007f4dd77fd210 error 4 in libntirpc.so.5.8[7f4e1f50c000+2c000] likely on CPU 7 (core 0, socket 7)
Nov 23 21:03:58 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 23 21:03:58 compute-1 systemd[1]: Started Process Core Dump (PID 231459/UID 0).
Nov 23 21:03:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:03:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:58.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:03:58 compute-1 nova_compute[230183]: 2025-11-23 21:03:58.625 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:03:58 compute-1 nova_compute[230183]: 2025-11-23 21:03:58.626 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 21:03:58 compute-1 nova_compute[230183]: 2025-11-23 21:03:58.626 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 21:03:58 compute-1 nova_compute[230183]: 2025-11-23 21:03:58.642 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 21:03:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:03:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:03:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:59.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:03:59 compute-1 systemd-coredump[231460]: Process 230972 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 55:
                                                    #0  0x00007f4e1f52732e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 23 21:03:59 compute-1 systemd[1]: systemd-coredump@9-231459-0.service: Deactivated successfully.
Nov 23 21:03:59 compute-1 systemd[1]: systemd-coredump@9-231459-0.service: Consumed 1.140s CPU time.
Nov 23 21:03:59 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/4197645649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:03:59 compute-1 podman[231466]: 2025-11-23 21:03:59.63584353 +0000 UTC m=+0.030406390 container died 5f2bb77731781714ca2bb677abc38580fbcf9262dee8e9de7400628ca8495195 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 21:03:59 compute-1 systemd[1]: var-lib-containers-storage-overlay-a83b7eb3f5afeb4b96d0cc4d64fcb7e4fecc6be41db13f731cf53b24c499dfe1-merged.mount: Deactivated successfully.
Nov 23 21:03:59 compute-1 podman[231466]: 2025-11-23 21:03:59.691016257 +0000 UTC m=+0.085579067 container remove 5f2bb77731781714ca2bb677abc38580fbcf9262dee8e9de7400628ca8495195 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 21:03:59 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Main process exited, code=exited, status=139/n/a
Nov 23 21:03:59 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Failed with result 'exit-code'.
Nov 23 21:03:59 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.425s CPU time.
Nov 23 21:04:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:04:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:00.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:04:00 compute-1 ceph-mon[80135]: pgmap v669: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:04:00 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/470100489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:04:00 compute-1 sudo[231509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 21:04:00 compute-1 sudo[231509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:04:00 compute-1 sudo[231509]: pam_unix(sudo:session): session closed for user root
Nov 23 21:04:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:04:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:01.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:04:01 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:04:01 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:04:01 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:04:01 compute-1 ceph-mon[80135]: pgmap v670: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 23 21:04:01 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/54100438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:04:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:02.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:02 compute-1 sudo[231535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:04:02 compute-1 sudo[231535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:04:02 compute-1 sudo[231535]: pam_unix(sudo:session): session closed for user root
Nov 23 21:04:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:03.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:04 compute-1 ceph-mon[80135]: pgmap v671: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 21:04:04 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:04:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210404 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 21:04:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:04:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:04.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:04:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:04:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:05.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:04:06 compute-1 ceph-mon[80135]: pgmap v672: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 23 21:04:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:06.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:06 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:04:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:07.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:07 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 21:04:07 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2968166939' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 21:04:07 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 21:04:07 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2968166939' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 21:04:08 compute-1 ceph-mon[80135]: pgmap v673: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 23 21:04:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/2968166939' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 21:04:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/2968166939' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 21:04:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:04:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:08.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:04:09 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:04:09.066 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 21:04:09 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:04:09.067 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 23 21:04:09 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:04:09.069 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d8ff4ac4-2bee-48db-b79e-2466bc4db046, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:04:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:09.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:10 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Scheduled restart job, restart counter is at 10.
Nov 23 21:04:10 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 21:04:10 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.425s CPU time.
Nov 23 21:04:10 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 21:04:10 compute-1 podman[231566]: 2025-11-23 21:04:10.14180205 +0000 UTC m=+0.051845757 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 23 21:04:10 compute-1 podman[231564]: 2025-11-23 21:04:10.170686438 +0000 UTC m=+0.083150341 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 23 21:04:10 compute-1 podman[231649]: 2025-11-23 21:04:10.304703659 +0000 UTC m=+0.066924064 container create 28e57aa42cd2ce22aa90d15386d4b9c6fbc4c7148e89ce4a602828fc7daf9d0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 23 21:04:10 compute-1 podman[231649]: 2025-11-23 21:04:10.259067399 +0000 UTC m=+0.021287794 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 21:04:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:10.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:10 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8959c0210da2eb362aef70369d12cda156853952a6808f0124f32ae90a3c6af/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 23 21:04:10 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8959c0210da2eb362aef70369d12cda156853952a6808f0124f32ae90a3c6af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 21:04:10 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8959c0210da2eb362aef70369d12cda156853952a6808f0124f32ae90a3c6af/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 21:04:10 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8959c0210da2eb362aef70369d12cda156853952a6808f0124f32ae90a3c6af/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.fuxuha-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 21:04:10 compute-1 podman[231649]: 2025-11-23 21:04:10.554428107 +0000 UTC m=+0.316648522 container init 28e57aa42cd2ce22aa90d15386d4b9c6fbc4c7148e89ce4a602828fc7daf9d0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 23 21:04:10 compute-1 podman[231649]: 2025-11-23 21:04:10.559303328 +0000 UTC m=+0.321523723 container start 28e57aa42cd2ce22aa90d15386d4b9c6fbc4c7148e89ce4a602828fc7daf9d0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 23 21:04:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:10 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 23 21:04:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:10 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 23 21:04:10 compute-1 bash[231649]: 28e57aa42cd2ce22aa90d15386d4b9c6fbc4c7148e89ce4a602828fc7daf9d0c
Nov 23 21:04:10 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 21:04:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:10 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 23 21:04:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:10 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 23 21:04:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:10 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 23 21:04:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:10 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 23 21:04:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:10 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 23 21:04:10 compute-1 ceph-mon[80135]: pgmap v674: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 23 21:04:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:10 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 21:04:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:11.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:11 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:04:11 compute-1 ceph-mon[80135]: pgmap v675: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 21:04:11 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Nov 23 21:04:11 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:04:11.937519) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 21:04:11 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Nov 23 21:04:11 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931851937580, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2382, "num_deletes": 251, "total_data_size": 6373654, "memory_usage": 6460352, "flush_reason": "Manual Compaction"}
Nov 23 21:04:11 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Nov 23 21:04:12 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931852064315, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 4132392, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20850, "largest_seqno": 23226, "table_properties": {"data_size": 4122798, "index_size": 6024, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19855, "raw_average_key_size": 20, "raw_value_size": 4103685, "raw_average_value_size": 4191, "num_data_blocks": 264, "num_entries": 979, "num_filter_entries": 979, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763931630, "oldest_key_time": 1763931630, "file_creation_time": 1763931851, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Nov 23 21:04:12 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 126835 microseconds, and 8026 cpu microseconds.
Nov 23 21:04:12 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 21:04:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:04:12.064365) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 4132392 bytes OK
Nov 23 21:04:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:04:12.064385) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Nov 23 21:04:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:04:12.108180) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Nov 23 21:04:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:04:12.108233) EVENT_LOG_v1 {"time_micros": 1763931852108222, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 21:04:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:04:12.108259) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 21:04:12 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 6363185, prev total WAL file size 6363185, number of live WAL files 2.
Nov 23 21:04:12 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:04:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:04:12.110108) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Nov 23 21:04:12 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 21:04:12 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(4035KB)], [39(12MB)]
Nov 23 21:04:12 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931852110161, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 17511275, "oldest_snapshot_seqno": -1}
Nov 23 21:04:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:04:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:12.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:04:12 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 5440 keys, 15313707 bytes, temperature: kUnknown
Nov 23 21:04:12 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931852461447, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 15313707, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15275021, "index_size": 23984, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13637, "raw_key_size": 137266, "raw_average_key_size": 25, "raw_value_size": 15174232, "raw_average_value_size": 2789, "num_data_blocks": 991, "num_entries": 5440, "num_filter_entries": 5440, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763931852, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Nov 23 21:04:12 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 21:04:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:04:12.461753) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 15313707 bytes
Nov 23 21:04:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:04:12.481389) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 49.8 rd, 43.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 12.8 +0.0 blob) out(14.6 +0.0 blob), read-write-amplify(7.9) write-amplify(3.7) OK, records in: 5960, records dropped: 520 output_compression: NoCompression
Nov 23 21:04:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:04:12.481425) EVENT_LOG_v1 {"time_micros": 1763931852481411, "job": 22, "event": "compaction_finished", "compaction_time_micros": 351372, "compaction_time_cpu_micros": 26817, "output_level": 6, "num_output_files": 1, "total_output_size": 15313707, "num_input_records": 5960, "num_output_records": 5440, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 21:04:12 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:04:12 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931852482326, "job": 22, "event": "table_file_deletion", "file_number": 41}
Nov 23 21:04:12 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:04:12 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931852485174, "job": 22, "event": "table_file_deletion", "file_number": 39}
Nov 23 21:04:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:04:12.110004) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:04:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:04:12.485266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:04:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:04:12.485273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:04:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:04:12.485275) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:04:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:04:12.485277) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:04:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:04:12.485279) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:04:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:13.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:14 compute-1 ceph-mon[80135]: pgmap v676: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 21:04:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:14.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:15.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:16 compute-1 ceph-mon[80135]: pgmap v677: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Nov 23 21:04:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:04:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:16.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:04:16 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:04:16 compute-1 podman[231710]: 2025-11-23 21:04:16.649452587 +0000 UTC m=+0.064719744 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 23 21:04:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:16 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Nov 23 21:04:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:16 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Nov 23 21:04:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:16 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 21:04:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:16 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 21:04:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:16 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 21:04:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:16 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 21:04:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:16 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 21:04:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:16 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 21:04:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:17.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:18 compute-1 ceph-mon[80135]: pgmap v678: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Nov 23 21:04:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:04:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:18.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:19.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210420 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 21:04:20 compute-1 ceph-mon[80135]: pgmap v679: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Nov 23 21:04:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:04:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:20.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:04:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:21.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:21 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:04:22 compute-1 ceph-mon[80135]: pgmap v680: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Nov 23 21:04:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:04:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:22.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:04:22 compute-1 sudo[231734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:04:22 compute-1 sudo[231734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:04:22 compute-1 sudo[231734]: pam_unix(sudo:session): session closed for user root
Nov 23 21:04:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000022:nfs.cephfs.0: -2
Nov 23 21:04:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 21:04:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 23 21:04:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 23 21:04:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 23 21:04:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 23 21:04:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 23 21:04:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 23 21:04:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 21:04:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 21:04:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 21:04:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 23 21:04:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 21:04:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 23 21:04:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 23 21:04:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 23 21:04:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 23 21:04:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 23 21:04:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 23 21:04:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 23 21:04:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 23 21:04:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 23 21:04:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 23 21:04:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 23 21:04:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 23 21:04:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 21:04:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 23 21:04:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 21:04:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fc0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:04:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:23.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:04:24 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:24 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:24 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:24 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f98000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:04:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:24.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:04:24 compute-1 ceph-mon[80135]: pgmap v681: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Nov 23 21:04:24 compute-1 sshd-session[231733]: Invalid user test from 186.201.54.90 port 44717
Nov 23 21:04:24 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:24 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:25 compute-1 sshd-session[231733]: Connection closed by invalid user test 186.201.54.90 port 44717 [preauth]
Nov 23 21:04:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:25.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:26 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210426 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 21:04:26 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:26 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f9c000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:26 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210426 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 21:04:26 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:26 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:04:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:26.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:04:26 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:04:26 compute-1 ceph-mon[80135]: pgmap v682: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Nov 23 21:04:26 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:26 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f980016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:27.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:27 compute-1 ceph-mon[80135]: pgmap v683: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 597 B/s wr, 2 op/s
Nov 23 21:04:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:28 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:28 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f9c001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:28.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:28 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:29.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:30 compute-1 ceph-mon[80135]: pgmap v684: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 597 B/s wr, 2 op/s
Nov 23 21:04:30 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:30 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f980016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:30 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:30 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:04:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:30.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:04:30 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:30 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f9c001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:31.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:31 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:04:32 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:32 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:32 compute-1 ceph-mon[80135]: pgmap v685: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 597 B/s wr, 2 op/s
Nov 23 21:04:32 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:32 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f980016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:04:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:32.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:04:32 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:32 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:04:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:33.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:04:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:04:34 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:34 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f9c001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:34 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:34 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:34.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:34 compute-1 ceph-mon[80135]: pgmap v686: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Nov 23 21:04:34 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:34 : epoch 692376ca : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 21:04:34 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:34 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f98002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:35.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:35 compute-1 ceph-mon[80135]: pgmap v687: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 255 B/s wr, 1 op/s
Nov 23 21:04:36 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:36 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:36 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:36 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f9c002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:36.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:36 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:04:36 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:36 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:37.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:37 : epoch 692376ca : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 21:04:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:37 : epoch 692376ca : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 21:04:38 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:38 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f98002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:38 compute-1 ceph-mon[80135]: pgmap v688: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 23 21:04:38 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:38 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:38.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:38 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:38 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f9c002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:39.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:40 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:40 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:40 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:40 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f98002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:04:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:40.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:04:40 compute-1 ceph-mon[80135]: pgmap v689: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 23 21:04:40 compute-1 podman[231784]: 2025-11-23 21:04:40.638340641 +0000 UTC m=+0.050327487 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 23 21:04:40 compute-1 podman[231783]: 2025-11-23 21:04:40.677931958 +0000 UTC m=+0.092841132 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Nov 23 21:04:40 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:40 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:40 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:40 : epoch 692376ca : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 21:04:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:41.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:41 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:04:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:42 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f9c002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:42 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:42.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:42 compute-1 ceph-mon[80135]: pgmap v690: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 21:04:42 compute-1 sudo[231829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:04:42 compute-1 sudo[231829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:04:42 compute-1 sudo[231829]: pam_unix(sudo:session): session closed for user root
Nov 23 21:04:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:42 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:43.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:44 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:44 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:44 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:44 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f9c003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:04:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:44.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:04:44 compute-1 ceph-mon[80135]: pgmap v691: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 21:04:44 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:44 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:45.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:45 compute-1 ceph-mon[80135]: pgmap v692: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 21:04:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:46 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210446 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 21:04:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:46 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:04:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:46.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:04:46 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:04:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:46 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f9c003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:47.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:47 compute-1 podman[231857]: 2025-11-23 21:04:47.637111181 +0000 UTC m=+0.056836372 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 23 21:04:48 compute-1 ceph-mon[80135]: pgmap v693: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 21:04:48 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:48 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:48 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:48 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:48.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:48 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:48 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:49 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:04:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:49.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:50 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f9c003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:50 compute-1 ceph-mon[80135]: pgmap v694: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 21:04:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:50 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0003a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:50.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:50 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:04:51.061 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:04:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:04:51.062 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:04:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:04:51.062 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:04:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:51.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:51 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:04:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:52 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:52 compute-1 ceph-mon[80135]: pgmap v695: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 21:04:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:52 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f9c003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:04:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:52.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:04:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:52 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0003a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:53.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:54 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:54 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:54 compute-1 ceph-mon[80135]: pgmap v696: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 23 21:04:54 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:54 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:54.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:54 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:54 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fbc0013a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:55.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:55 compute-1 nova_compute[230183]: 2025-11-23 21:04:55.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:04:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:56 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0003a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:56 compute-1 ceph-mon[80135]: pgmap v697: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 23 21:04:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:56 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:56 compute-1 nova_compute[230183]: 2025-11-23 21:04:56.422 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:04:56 compute-1 nova_compute[230183]: 2025-11-23 21:04:56.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:04:56 compute-1 nova_compute[230183]: 2025-11-23 21:04:56.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:04:56 compute-1 nova_compute[230183]: 2025-11-23 21:04:56.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:04:56 compute-1 nova_compute[230183]: 2025-11-23 21:04:56.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:04:56 compute-1 nova_compute[230183]: 2025-11-23 21:04:56.427 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 21:04:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:56.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:56 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:04:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:56 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:57 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2124596384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:04:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:57.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:57 compute-1 nova_compute[230183]: 2025-11-23 21:04:57.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:04:57 compute-1 nova_compute[230183]: 2025-11-23 21:04:57.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:04:57 compute-1 nova_compute[230183]: 2025-11-23 21:04:57.445 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:04:57 compute-1 nova_compute[230183]: 2025-11-23 21:04:57.446 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:04:57 compute-1 nova_compute[230183]: 2025-11-23 21:04:57.447 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:04:57 compute-1 nova_compute[230183]: 2025-11-23 21:04:57.447 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 21:04:57 compute-1 nova_compute[230183]: 2025-11-23 21:04:57.447 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:04:57 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:04:57 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3199479046' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:04:57 compute-1 nova_compute[230183]: 2025-11-23 21:04:57.891 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:04:58 compute-1 nova_compute[230183]: 2025-11-23 21:04:58.030 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 21:04:58 compute-1 nova_compute[230183]: 2025-11-23 21:04:58.031 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5274MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 21:04:58 compute-1 nova_compute[230183]: 2025-11-23 21:04:58.032 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:04:58 compute-1 nova_compute[230183]: 2025-11-23 21:04:58.033 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:04:58 compute-1 nova_compute[230183]: 2025-11-23 21:04:58.087 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 21:04:58 compute-1 nova_compute[230183]: 2025-11-23 21:04:58.087 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 21:04:58 compute-1 nova_compute[230183]: 2025-11-23 21:04:58.101 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:04:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:58 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fbc001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:58 compute-1 ceph-mon[80135]: pgmap v698: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:04:58 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3486645363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:04:58 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3199479046' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:04:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:58 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0003a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:04:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:58.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:04:58 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:04:58 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3348488227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:04:58 compute-1 nova_compute[230183]: 2025-11-23 21:04:58.554 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:04:58 compute-1 nova_compute[230183]: 2025-11-23 21:04:58.559 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:04:58 compute-1 nova_compute[230183]: 2025-11-23 21:04:58.576 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:04:58 compute-1 nova_compute[230183]: 2025-11-23 21:04:58.578 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 21:04:58 compute-1 nova_compute[230183]: 2025-11-23 21:04:58.578 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:04:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:58 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0003a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:04:59 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3348488227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:04:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:04:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:04:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:59.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:04:59 compute-1 nova_compute[230183]: 2025-11-23 21:04:59.579 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:04:59 compute-1 nova_compute[230183]: 2025-11-23 21:04:59.579 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 21:04:59 compute-1 nova_compute[230183]: 2025-11-23 21:04:59.580 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 21:04:59 compute-1 nova_compute[230183]: 2025-11-23 21:04:59.591 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 21:05:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:00 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:00 compute-1 ceph-mon[80135]: pgmap v699: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:05:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:00 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fbc001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:00.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:00 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:01 compute-1 sudo[231930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 21:05:01 compute-1 sudo[231930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:05:01 compute-1 sudo[231930]: pam_unix(sudo:session): session closed for user root
Nov 23 21:05:01 compute-1 sudo[231955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 23 21:05:01 compute-1 sudo[231955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:05:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:01.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:01 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:05:01 compute-1 podman[232053]: 2025-11-23 21:05:01.698482409 +0000 UTC m=+0.061213561 container exec e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 21:05:01 compute-1 podman[232053]: 2025-11-23 21:05:01.795625686 +0000 UTC m=+0.158356858 container exec_died e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 21:05:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:02 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0003a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:02 compute-1 podman[232173]: 2025-11-23 21:05:02.207540944 +0000 UTC m=+0.053726099 container exec 64d60b8099df0a9bc1b978bb8d0ff809e5476e0bdc0e1ff07d52a594a6c59770 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 21:05:02 compute-1 podman[232173]: 2025-11-23 21:05:02.213202467 +0000 UTC m=+0.059387602 container exec_died 64d60b8099df0a9bc1b978bb8d0ff809e5476e0bdc0e1ff07d52a594a6c59770 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 21:05:02 compute-1 ceph-mon[80135]: pgmap v700: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:05:02 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 23 21:05:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:02 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.002000052s ======
Nov 23 21:05:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:02.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Nov 23 21:05:02 compute-1 podman[232265]: 2025-11-23 21:05:02.517779022 +0000 UTC m=+0.048443436 container exec 28e57aa42cd2ce22aa90d15386d4b9c6fbc4c7148e89ce4a602828fc7daf9d0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 23 21:05:02 compute-1 podman[232265]: 2025-11-23 21:05:02.529249511 +0000 UTC m=+0.059913895 container exec_died 28e57aa42cd2ce22aa90d15386d4b9c6fbc4c7148e89ce4a602828fc7daf9d0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 21:05:02 compute-1 podman[232329]: 2025-11-23 21:05:02.744685875 +0000 UTC m=+0.060625364 container exec 5efdb4ba0bcd5fe6f292f73f388707523f3095db64c5b10f074cdf2e15575dfb (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei)
Nov 23 21:05:02 compute-1 podman[232329]: 2025-11-23 21:05:02.75820572 +0000 UTC m=+0.074145179 container exec_died 5efdb4ba0bcd5fe6f292f73f388707523f3095db64c5b10f074cdf2e15575dfb (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei)
Nov 23 21:05:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:02 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fbc001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:03 compute-1 podman[232395]: 2025-11-23 21:05:03.018245756 +0000 UTC m=+0.073475981 container exec 2804f80c8f66202230c93ef9e5dfb79827d221d8c2f51d077915585a4021bec3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, version=2.2.4, io.openshift.expose-services=, name=keepalived, distribution-scope=public, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 23 21:05:03 compute-1 sudo[232397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:05:03 compute-1 sudo[232397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:05:03 compute-1 sudo[232397]: pam_unix(sudo:session): session closed for user root
Nov 23 21:05:03 compute-1 podman[232395]: 2025-11-23 21:05:03.055180081 +0000 UTC m=+0.110410216 container exec_died 2804f80c8f66202230c93ef9e5dfb79827d221d8c2f51d077915585a4021bec3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, version=2.2.4, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, build-date=2023-02-22T09:23:20, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, com.redhat.component=keepalived-container, description=keepalived for Ceph, distribution-scope=public, io.openshift.expose-services=)
Nov 23 21:05:03 compute-1 sudo[231955]: pam_unix(sudo:session): session closed for user root
Nov 23 21:05:03 compute-1 sudo[232455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 21:05:03 compute-1 sudo[232455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:05:03 compute-1 sudo[232455]: pam_unix(sudo:session): session closed for user root
Nov 23 21:05:03 compute-1 sudo[232480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 21:05:03 compute-1 sudo[232480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:05:03 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/500924673' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:05:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:05:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:05:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:05:03 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3282201788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:05:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:03.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:03 compute-1 sudo[232480]: pam_unix(sudo:session): session closed for user root
Nov 23 21:05:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:04 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:04 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0003a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:04 compute-1 ceph-mon[80135]: pgmap v701: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:05:04 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 23 21:05:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:04.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:04 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:05:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:05.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:05:05 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:05:05 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:05:05 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 23 21:05:05 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:05:05 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 21:05:05 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:05:05 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:05:05 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 21:05:05 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 21:05:05 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:05:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:06 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fbc003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:06 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fbc003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:06.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:06 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:05:06 compute-1 ceph-mon[80135]: pgmap v702: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 23 21:05:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:06 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0003a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:07.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:07 compute-1 ceph-mon[80135]: pgmap v703: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:05:08 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:08 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0003a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:08 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:08 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f8c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:05:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:08.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:05:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/806054957' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 21:05:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/806054957' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 21:05:08 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:08 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f90000e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:09.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:09 compute-1 ceph-mon[80135]: pgmap v704: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:05:09 compute-1 sudo[232542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 21:05:09 compute-1 sudo[232542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:05:09 compute-1 sudo[232542]: pam_unix(sudo:session): session closed for user root
Nov 23 21:05:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:10 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4004400 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:10 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0003a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:10.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:10 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:05:10 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:05:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:10 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f8c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:11.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:11 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:05:11 compute-1 podman[232569]: 2025-11-23 21:05:11.652034095 +0000 UTC m=+0.065926337 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 23 21:05:11 compute-1 podman[232568]: 2025-11-23 21:05:11.704606452 +0000 UTC m=+0.118314889 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 21:05:11 compute-1 ceph-mon[80135]: pgmap v705: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:05:12 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:12 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f90001920 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:12 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:12 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4004400 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:12.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:12 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:12 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0003a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:13.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:14 compute-1 ceph-mon[80135]: pgmap v706: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:05:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:14 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f8c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:14 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f90001920 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:05:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:14.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:05:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:14 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4004400 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:15.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:16 compute-1 ceph-mon[80135]: pgmap v707: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 23 21:05:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:16 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0003a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:16 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f8c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:16 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:05:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:05:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:16.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:05:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:16 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f90001920 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:17.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:17 compute-1 sshd-session[232617]: Invalid user solv from 161.35.133.66 port 42866
Nov 23 21:05:17 compute-1 sshd-session[232617]: Connection closed by invalid user solv 161.35.133.66 port 42866 [preauth]
Nov 23 21:05:17 compute-1 podman[232619]: 2025-11-23 21:05:17.74423279 +0000 UTC m=+0.067421727 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS)
Nov 23 21:05:18 compute-1 ceph-mon[80135]: pgmap v708: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:05:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:05:18 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210518 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 21:05:18 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:18 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4004400 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:18 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:18 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4004400 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:18.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:18 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:18 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0003a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:05:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:19.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:05:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:20 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f90002db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:20 compute-1 ceph-mon[80135]: pgmap v709: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:05:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:20 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f8c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:20.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:20 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4004400 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:05:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:21.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:05:21 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:05:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0003a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:22 compute-1 ceph-mon[80135]: pgmap v710: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:05:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f90002db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:05:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:22.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:05:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f8c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:23 compute-1 sudo[232641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:05:23 compute-1 sudo[232641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:05:23 compute-1 sudo[232641]: pam_unix(sudo:session): session closed for user root
Nov 23 21:05:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:23.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:24 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:24 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f8c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:24 compute-1 ceph-mon[80135]: pgmap v711: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 23 21:05:24 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:24 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f8c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:24.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:24 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:24 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4004400 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:25.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:26 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:26 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0003a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:26 compute-1 ceph-mon[80135]: pgmap v712: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 85 B/s wr, 0 op/s
Nov 23 21:05:26 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:26 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f90003ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:26 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:05:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:26.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:26 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:26 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f8c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:27 : epoch 692376ca : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 21:05:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:27.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:28 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4004400 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:28 compute-1 ceph-mon[80135]: pgmap v713: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 21:05:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:28 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0003a30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:05:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:28.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:05:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:28 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f90003ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:05:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:29.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:05:30 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:30 : epoch 692376ca : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 21:05:30 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:30 : epoch 692376ca : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 21:05:30 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:30 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f8c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:30 compute-1 ceph-mon[80135]: pgmap v714: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 23 21:05:30 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:30 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4004420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:30.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:30 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:30 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0003a30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:31.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:31 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:05:32 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:32 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f90003ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:32 compute-1 ceph-mon[80135]: pgmap v715: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 21:05:32 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:32 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f8c003c10 fd 48 proxy ignored for local
Nov 23 21:05:32 compute-1 kernel: ganesha.nfsd[232539]: segfault at 50 ip 00007f806faca32e sp 00007f8026ffc210 error 4 in libntirpc.so.5.8[7f806faaf000+2c000] likely on CPU 6 (core 0, socket 6)
Nov 23 21:05:32 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 23 21:05:32 compute-1 systemd[1]: Started Process Core Dump (PID 232671/UID 0).
Nov 23 21:05:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:05:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:32.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:05:33 compute-1 systemd-coredump[232672]: Process 231668 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 58:
                                                    #0  0x00007f806faca32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 23 21:05:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:05:33 compute-1 systemd[1]: systemd-coredump@10-232671-0.service: Deactivated successfully.
Nov 23 21:05:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:33.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:33 compute-1 podman[232677]: 2025-11-23 21:05:33.487627836 +0000 UTC m=+0.028166440 container died 28e57aa42cd2ce22aa90d15386d4b9c6fbc4c7148e89ce4a602828fc7daf9d0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 23 21:05:33 compute-1 systemd[1]: var-lib-containers-storage-overlay-c8959c0210da2eb362aef70369d12cda156853952a6808f0124f32ae90a3c6af-merged.mount: Deactivated successfully.
Nov 23 21:05:33 compute-1 podman[232677]: 2025-11-23 21:05:33.547811427 +0000 UTC m=+0.088350011 container remove 28e57aa42cd2ce22aa90d15386d4b9c6fbc4c7148e89ce4a602828fc7daf9d0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 23 21:05:33 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Main process exited, code=exited, status=139/n/a
Nov 23 21:05:33 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Failed with result 'exit-code'.
Nov 23 21:05:33 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.324s CPU time.
Nov 23 21:05:34 compute-1 ceph-mon[80135]: pgmap v716: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 23 21:05:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:34.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:35.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:36 compute-1 ceph-mon[80135]: pgmap v717: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 21:05:36 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:05:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:05:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:36.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:05:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:37.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:38 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210538 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 142ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 21:05:38 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210538 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 21:05:38 compute-1 ceph-mon[80135]: pgmap v718: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 21:05:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:38.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:39.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - - [23/Nov/2025:21:05:39.955 +0000] "GET /swift/info HTTP/1.1" 200 539 - "python-urllib3/1.26.5" - latency=0.000000000s
Nov 23 21:05:39 compute-1 ceph-mon[80135]: pgmap v719: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 21:05:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:40.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:41 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:05:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:41.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:42 compute-1 ceph-mon[80135]: pgmap v720: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 23 21:05:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:42.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:42 compute-1 podman[232729]: 2025-11-23 21:05:42.698688408 +0000 UTC m=+0.085786501 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 23 21:05:42 compute-1 podman[232728]: 2025-11-23 21:05:42.760926845 +0000 UTC m=+0.149207560 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 21:05:43 compute-1 sudo[232773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:05:43 compute-1 sudo[232773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:05:43 compute-1 sudo[232773]: pam_unix(sudo:session): session closed for user root
Nov 23 21:05:43 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Scheduled restart job, restart counter is at 11.
Nov 23 21:05:43 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 21:05:43 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.324s CPU time.
Nov 23 21:05:43 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 21:05:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:43.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:44 compute-1 podman[232845]: 2025-11-23 21:05:44.061571547 +0000 UTC m=+0.044455118 container create a1edfbc64c688db19e55c818e2bdf0df61f7d0676d2c3acdb33415079a20ecc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 23 21:05:44 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e27abaefaba8199288bd5be063d902d57972d49e3076cc8c05a506a1ac3a488/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 23 21:05:44 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e27abaefaba8199288bd5be063d902d57972d49e3076cc8c05a506a1ac3a488/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 21:05:44 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e27abaefaba8199288bd5be063d902d57972d49e3076cc8c05a506a1ac3a488/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 21:05:44 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e27abaefaba8199288bd5be063d902d57972d49e3076cc8c05a506a1ac3a488/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.fuxuha-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 21:05:44 compute-1 podman[232845]: 2025-11-23 21:05:44.127135443 +0000 UTC m=+0.110019034 container init a1edfbc64c688db19e55c818e2bdf0df61f7d0676d2c3acdb33415079a20ecc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 23 21:05:44 compute-1 podman[232845]: 2025-11-23 21:05:44.133375552 +0000 UTC m=+0.116259123 container start a1edfbc64c688db19e55c818e2bdf0df61f7d0676d2c3acdb33415079a20ecc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 21:05:44 compute-1 bash[232845]: a1edfbc64c688db19e55c818e2bdf0df61f7d0676d2c3acdb33415079a20ecc2
Nov 23 21:05:44 compute-1 podman[232845]: 2025-11-23 21:05:44.04384693 +0000 UTC m=+0.026730521 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 21:05:44 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:44 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 23 21:05:44 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 21:05:44 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:44 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 23 21:05:44 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:44 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 23 21:05:44 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:44 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 23 21:05:44 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:44 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 23 21:05:44 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:44 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 23 21:05:44 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:44 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 23 21:05:44 compute-1 ceph-mon[80135]: pgmap v721: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 21:05:44 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:44 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 21:05:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:44.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:45 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e143 e143: 3 total, 3 up, 3 in
Nov 23 21:05:45 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0.
Nov 23 21:05:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:05:45.371179) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 21:05:45 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43
Nov 23 21:05:45 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931945371223, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1187, "num_deletes": 251, "total_data_size": 2796786, "memory_usage": 2842576, "flush_reason": "Manual Compaction"}
Nov 23 21:05:45 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started
Nov 23 21:05:45 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931945379401, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 1182461, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23231, "largest_seqno": 24413, "table_properties": {"data_size": 1178229, "index_size": 1820, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10920, "raw_average_key_size": 20, "raw_value_size": 1169096, "raw_average_value_size": 2189, "num_data_blocks": 78, "num_entries": 534, "num_filter_entries": 534, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763931853, "oldest_key_time": 1763931853, "file_creation_time": 1763931945, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Nov 23 21:05:45 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 8302 microseconds, and 4314 cpu microseconds.
Nov 23 21:05:45 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 21:05:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:05:45.379471) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 1182461 bytes OK
Nov 23 21:05:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:05:45.379509) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started
Nov 23 21:05:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:05:45.380922) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done
Nov 23 21:05:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:05:45.380969) EVENT_LOG_v1 {"time_micros": 1763931945380959, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 21:05:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:05:45.380994) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 21:05:45 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2791094, prev total WAL file size 2791094, number of live WAL files 2.
Nov 23 21:05:45 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:05:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:05:45.382274) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353033' seq:72057594037927935, type:22 .. '6D67727374617400373535' seq:0, type:0; will stop at (end)
Nov 23 21:05:45 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 21:05:45 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(1154KB)], [42(14MB)]
Nov 23 21:05:45 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931945382328, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 16496168, "oldest_snapshot_seqno": -1}
Nov 23 21:05:45 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 5491 keys, 13037002 bytes, temperature: kUnknown
Nov 23 21:05:45 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931945538552, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 13037002, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13001258, "index_size": 20914, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13765, "raw_key_size": 138625, "raw_average_key_size": 25, "raw_value_size": 12902921, "raw_average_value_size": 2349, "num_data_blocks": 856, "num_entries": 5491, "num_filter_entries": 5491, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763931945, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}}
Nov 23 21:05:45 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 21:05:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:05:45.539034) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 13037002 bytes
Nov 23 21:05:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:05:45.540634) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 105.5 rd, 83.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 14.6 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(25.0) write-amplify(11.0) OK, records in: 5974, records dropped: 483 output_compression: NoCompression
Nov 23 21:05:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:05:45.540660) EVENT_LOG_v1 {"time_micros": 1763931945540647, "job": 24, "event": "compaction_finished", "compaction_time_micros": 156408, "compaction_time_cpu_micros": 36323, "output_level": 6, "num_output_files": 1, "total_output_size": 13037002, "num_input_records": 5974, "num_output_records": 5491, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 21:05:45 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:05:45 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931945541299, "job": 24, "event": "table_file_deletion", "file_number": 44}
Nov 23 21:05:45 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:05:45 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931945544896, "job": 24, "event": "table_file_deletion", "file_number": 42}
Nov 23 21:05:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:05:45.382147) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:05:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:05:45.545112) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:05:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:05:45.545124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:05:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:05:45.545128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:05:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:05:45.545131) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:05:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:05:45.545134) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:05:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:45.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:46 compute-1 ceph-mon[80135]: pgmap v722: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 23 21:05:46 compute-1 ceph-mon[80135]: osdmap e143: 3 total, 3 up, 3 in
Nov 23 21:05:46 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e144 e144: 3 total, 3 up, 3 in
Nov 23 21:05:46 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:05:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:46.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:47 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e145 e145: 3 total, 3 up, 3 in
Nov 23 21:05:47 compute-1 ceph-mon[80135]: osdmap e144: 3 total, 3 up, 3 in
Nov 23 21:05:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:47.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:48 compute-1 ceph-mon[80135]: pgmap v725: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 21:05:48 compute-1 ceph-mon[80135]: osdmap e145: 3 total, 3 up, 3 in
Nov 23 21:05:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:05:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:48.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:48 compute-1 podman[232905]: 2025-11-23 21:05:48.64822139 +0000 UTC m=+0.060527452 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 21:05:49 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e146 e146: 3 total, 3 up, 3 in
Nov 23 21:05:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:49.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:50 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 21:05:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:50 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 21:05:50 compute-1 ceph-mon[80135]: pgmap v727: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 21:05:50 compute-1 ceph-mon[80135]: osdmap e146: 3 total, 3 up, 3 in
Nov 23 21:05:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:50.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:05:51.062 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:05:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:05:51.063 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:05:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:05:51.063 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:05:51 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:05:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:05:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:51.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:05:52 compute-1 ceph-mon[80135]: pgmap v729: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 6.9 MiB/s wr, 62 op/s
Nov 23 21:05:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:05:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:52.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:05:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:53.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:54 compute-1 ceph-mon[80135]: pgmap v730: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 5.9 MiB/s wr, 53 op/s
Nov 23 21:05:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:54.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:55 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e147 e147: 3 total, 3 up, 3 in
Nov 23 21:05:55 compute-1 nova_compute[230183]: 2025-11-23 21:05:55.435 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:05:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:05:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:55.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:05:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 21:05:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 23 21:05:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 23 21:05:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 23 21:05:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 23 21:05:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 23 21:05:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 23 21:05:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 21:05:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 21:05:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 21:05:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 23 21:05:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 21:05:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 23 21:05:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 23 21:05:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 23 21:05:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 23 21:05:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 23 21:05:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 23 21:05:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 23 21:05:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 23 21:05:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 23 21:05:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 23 21:05:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 21:05:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 23 21:05:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 21:05:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 23 21:05:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 23 21:05:56 compute-1 ceph-mon[80135]: pgmap v731: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 5.1 MiB/s wr, 52 op/s
Nov 23 21:05:56 compute-1 ceph-mon[80135]: osdmap e147: 3 total, 3 up, 3 in
Nov 23 21:05:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ac000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:56 compute-1 nova_compute[230183]: 2025-11-23 21:05:56.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:05:56 compute-1 nova_compute[230183]: 2025-11-23 21:05:56.428 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:05:56 compute-1 nova_compute[230183]: 2025-11-23 21:05:56.428 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:05:56 compute-1 nova_compute[230183]: 2025-11-23 21:05:56.428 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 21:05:56 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:05:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:56.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:57 : epoch 69237728 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8a00014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:57 compute-1 nova_compute[230183]: 2025-11-23 21:05:57.424 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:05:57 compute-1 nova_compute[230183]: 2025-11-23 21:05:57.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:05:57 compute-1 nova_compute[230183]: 2025-11-23 21:05:57.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:05:57 compute-1 nova_compute[230183]: 2025-11-23 21:05:57.448 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:05:57 compute-1 nova_compute[230183]: 2025-11-23 21:05:57.449 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:05:57 compute-1 nova_compute[230183]: 2025-11-23 21:05:57.449 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:05:57 compute-1 nova_compute[230183]: 2025-11-23 21:05:57.449 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 21:05:57 compute-1 nova_compute[230183]: 2025-11-23 21:05:57.449 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:05:57 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:05:57 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1302579223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:05:57 compute-1 nova_compute[230183]: 2025-11-23 21:05:57.866 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:05:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:57.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:58 compute-1 nova_compute[230183]: 2025-11-23 21:05:58.027 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 21:05:58 compute-1 nova_compute[230183]: 2025-11-23 21:05:58.029 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5227MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 21:05:58 compute-1 nova_compute[230183]: 2025-11-23 21:05:58.029 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:05:58 compute-1 nova_compute[230183]: 2025-11-23 21:05:58.030 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:05:58 compute-1 nova_compute[230183]: 2025-11-23 21:05:58.117 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 21:05:58 compute-1 nova_compute[230183]: 2025-11-23 21:05:58.118 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 21:05:58 compute-1 nova_compute[230183]: 2025-11-23 21:05:58.135 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:05:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:58 : epoch 69237728 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb888000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210558 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 21:05:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:58 : epoch 69237728 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ac000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:58 compute-1 ceph-mon[80135]: pgmap v733: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 5.1 MiB/s wr, 52 op/s
Nov 23 21:05:58 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1302579223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:05:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:58.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:05:58 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:05:58 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1938272778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:05:58 compute-1 nova_compute[230183]: 2025-11-23 21:05:58.611 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:05:58 compute-1 nova_compute[230183]: 2025-11-23 21:05:58.615 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:05:58 compute-1 nova_compute[230183]: 2025-11-23 21:05:58.633 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:05:58 compute-1 nova_compute[230183]: 2025-11-23 21:05:58.634 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 21:05:58 compute-1 nova_compute[230183]: 2025-11-23 21:05:58.635 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:05:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:59 : epoch 69237728 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb88c000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:05:59 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1938272778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:05:59 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2345273299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:05:59 compute-1 nova_compute[230183]: 2025-11-23 21:05:59.635 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:05:59 compute-1 nova_compute[230183]: 2025-11-23 21:05:59.635 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 21:05:59 compute-1 nova_compute[230183]: 2025-11-23 21:05:59.636 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 21:05:59 compute-1 nova_compute[230183]: 2025-11-23 21:05:59.657 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 21:05:59 compute-1 nova_compute[230183]: 2025-11-23 21:05:59.657 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:05:59 compute-1 nova_compute[230183]: 2025-11-23 21:05:59.658 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:05:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:05:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:05:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:59.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:06:00 : epoch 69237728 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8a00021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:06:00 : epoch 69237728 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8880016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:00 compute-1 ceph-mon[80135]: pgmap v734: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.9 MiB/s wr, 31 op/s
Nov 23 21:06:00 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2552729546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:06:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:00.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:06:01 : epoch 69237728 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ac000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:01 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:06:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:06:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:01.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:06:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:06:02 : epoch 69237728 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8a00021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:06:02 : epoch 69237728 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb88c001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:02 compute-1 ceph-mon[80135]: pgmap v735: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 818 B/s wr, 5 op/s
Nov 23 21:06:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:06:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:02.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:06:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:06:03 : epoch 69237728 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8880016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:03 compute-1 sudo[232992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:06:03 compute-1 sudo[232992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:06:03 compute-1 sudo[232992]: pam_unix(sudo:session): session closed for user root
Nov 23 21:06:03 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/266415326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:06:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:06:03 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2158962747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:06:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:06:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:03.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:06:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:06:04 : epoch 69237728 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ac000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:04 compute-1 kernel: ganesha.nfsd[232933]: segfault at 50 ip 00007fb95c13d32e sp 00007fb92a7fb210 error 4 in libntirpc.so.5.8[7fb95c122000+2c000] likely on CPU 2 (core 0, socket 2)
Nov 23 21:06:04 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 23 21:06:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:06:04 : epoch 69237728 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8a00021d0 fd 38 proxy ignored for local
Nov 23 21:06:04 compute-1 systemd[1]: Started Process Core Dump (PID 233018/UID 0).
Nov 23 21:06:04 compute-1 ceph-mon[80135]: pgmap v736: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 818 B/s wr, 5 op/s
Nov 23 21:06:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000028s ======
Nov 23 21:06:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:04.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 23 21:06:05 compute-1 systemd-coredump[233019]: Process 232865 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 43:
                                                    #0  0x00007fb95c13d32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 23 21:06:05 compute-1 systemd[1]: systemd-coredump@11-233018-0.service: Deactivated successfully.
Nov 23 21:06:05 compute-1 systemd[1]: systemd-coredump@11-233018-0.service: Consumed 1.152s CPU time.
Nov 23 21:06:05 compute-1 podman[233025]: 2025-11-23 21:06:05.707082754 +0000 UTC m=+0.044404512 container died a1edfbc64c688db19e55c818e2bdf0df61f7d0676d2c3acdb33415079a20ecc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 21:06:05 compute-1 systemd[1]: var-lib-containers-storage-overlay-2e27abaefaba8199288bd5be063d902d57972d49e3076cc8c05a506a1ac3a488-merged.mount: Deactivated successfully.
Nov 23 21:06:05 compute-1 podman[233025]: 2025-11-23 21:06:05.755076134 +0000 UTC m=+0.092397892 container remove a1edfbc64c688db19e55c818e2bdf0df61f7d0676d2c3acdb33415079a20ecc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 23 21:06:05 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Main process exited, code=exited, status=139/n/a
Nov 23 21:06:05 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Failed with result 'exit-code'.
Nov 23 21:06:05 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.556s CPU time.
Nov 23 21:06:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:05.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:06 compute-1 ceph-mon[80135]: pgmap v737: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 B/s wr, 0 op/s
Nov 23 21:06:06 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:06:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:06.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:07.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:08 compute-1 ceph-mon[80135]: pgmap v738: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 261 B/s rd, 0 B/s wr, 0 op/s
Nov 23 21:06:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/2701893924' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 21:06:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/2701893924' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 21:06:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:06:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:08.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:06:09 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:09.032 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 21:06:09 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:09.033 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 23 21:06:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:09.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:10 compute-1 sudo[233070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 21:06:10 compute-1 sudo[233070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:06:10 compute-1 sudo[233070]: pam_unix(sudo:session): session closed for user root
Nov 23 21:06:10 compute-1 sudo[233095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 21:06:10 compute-1 sudo[233095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:06:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210610 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 21:06:10 compute-1 ceph-mon[80135]: pgmap v739: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 21:06:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000028s ======
Nov 23 21:06:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:10.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 23 21:06:10 compute-1 sudo[233095]: pam_unix(sudo:session): session closed for user root
Nov 23 21:06:11 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:06:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:06:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:11.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:06:12 compute-1 ceph-mon[80135]: pgmap v740: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 23 21:06:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:12.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:12 compute-1 sshd-session[233152]: Invalid user ubuntu from 92.118.39.92 port 58938
Nov 23 21:06:12 compute-1 podman[233155]: 2025-11-23 21:06:12.983179193 +0000 UTC m=+0.064894460 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Nov 23 21:06:13 compute-1 sshd-session[233152]: Connection closed by invalid user ubuntu 92.118.39.92 port 58938 [preauth]
Nov 23 21:06:13 compute-1 podman[233154]: 2025-11-23 21:06:13.01982083 +0000 UTC m=+0.100983252 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
Nov 23 21:06:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:13.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:14 compute-1 ceph-mon[80135]: pgmap v741: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 21:06:14 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:06:14 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:06:14 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:06:14 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 21:06:14 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:06:14 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:06:14 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 21:06:14 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 21:06:14 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:06:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:14.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:06:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:16.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:06:16 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Scheduled restart job, restart counter is at 12.
Nov 23 21:06:16 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 21:06:16 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.556s CPU time.
Nov 23 21:06:16 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 21:06:16 compute-1 podman[233250]: 2025-11-23 21:06:16.266973119 +0000 UTC m=+0.035694771 container create d5b74120fbf861ec21b580a080981227bcd9c52288af0a95ae65bbf92f739f0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 23 21:06:16 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24653c180c9318b519976f965307614ae6e36c0f21676083060c7a6287ff60f0/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 23 21:06:16 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24653c180c9318b519976f965307614ae6e36c0f21676083060c7a6287ff60f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 21:06:16 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24653c180c9318b519976f965307614ae6e36c0f21676083060c7a6287ff60f0/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 21:06:16 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24653c180c9318b519976f965307614ae6e36c0f21676083060c7a6287ff60f0/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.fuxuha-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 21:06:16 compute-1 podman[233250]: 2025-11-23 21:06:16.326031196 +0000 UTC m=+0.094752848 container init d5b74120fbf861ec21b580a080981227bcd9c52288af0a95ae65bbf92f739f0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 21:06:16 compute-1 podman[233250]: 2025-11-23 21:06:16.330479779 +0000 UTC m=+0.099201431 container start d5b74120fbf861ec21b580a080981227bcd9c52288af0a95ae65bbf92f739f0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 21:06:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:16 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 23 21:06:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:16 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 23 21:06:16 compute-1 bash[233250]: d5b74120fbf861ec21b580a080981227bcd9c52288af0a95ae65bbf92f739f0a
Nov 23 21:06:16 compute-1 podman[233250]: 2025-11-23 21:06:16.252161778 +0000 UTC m=+0.020883460 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 21:06:16 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 21:06:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:16 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 23 21:06:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:16 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 23 21:06:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:16 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 23 21:06:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:16 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 23 21:06:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:16 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 23 21:06:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:16 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 21:06:16 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:06:16 compute-1 ceph-mon[80135]: pgmap v742: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 23 21:06:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:06:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:16.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:06:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:18.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:18 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:18.035 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d8ff4ac4-2bee-48db-b79e-2466bc4db046, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:06:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:18.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:18 compute-1 ceph-mon[80135]: pgmap v743: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 21:06:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:06:19 compute-1 sudo[233310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 21:06:19 compute-1 sudo[233310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:06:19 compute-1 sudo[233310]: pam_unix(sudo:session): session closed for user root
Nov 23 21:06:19 compute-1 podman[233334]: 2025-11-23 21:06:19.223671794 +0000 UTC m=+0.072641454 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2)
Nov 23 21:06:19 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:06:19 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:06:19 compute-1 ceph-mon[80135]: pgmap v744: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 23 21:06:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:20.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:20.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:21 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:06:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:06:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:22.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:06:22 compute-1 ceph-mon[80135]: pgmap v745: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 255 B/s wr, 0 op/s
Nov 23 21:06:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:22 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 21:06:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:22 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 21:06:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:06:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:22.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:06:23 compute-1 sudo[233357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:06:23 compute-1 sudo[233357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:06:23 compute-1 sudo[233357]: pam_unix(sudo:session): session closed for user root
Nov 23 21:06:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:24.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:24 compute-1 ceph-mon[80135]: pgmap v746: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 255 B/s wr, 0 op/s
Nov 23 21:06:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:06:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:24.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:06:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:26.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:26 compute-1 ceph-mon[80135]: pgmap v747: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 21:06:26 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:06:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:06:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:26.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:06:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:28.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:28 compute-1 ceph-mon[80135]: pgmap v748: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 21:06:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 21:06:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 23 21:06:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 23 21:06:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 23 21:06:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 23 21:06:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 23 21:06:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 23 21:06:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 21:06:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 21:06:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 21:06:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 23 21:06:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 21:06:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 23 21:06:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 23 21:06:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 23 21:06:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 23 21:06:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 23 21:06:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 23 21:06:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 23 21:06:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 23 21:06:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 23 21:06:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 23 21:06:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 23 21:06:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 23 21:06:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 21:06:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 23 21:06:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 21:06:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:28.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:29 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb144000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:06:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:30.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:06:30 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:30 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb1300016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:30 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:30 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:30.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:30 compute-1 ceph-mon[80135]: pgmap v749: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 23 21:06:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:31 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:31 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:06:31 compute-1 ceph-mon[80135]: pgmap v750: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 23 21:06:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:06:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:32.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:06:32 compute-1 nova_compute[230183]: 2025-11-23 21:06:32.059 230187 DEBUG oslo_concurrency.lockutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "b88f69cf-a706-408d-8dd0-9c891ac278df" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:06:32 compute-1 nova_compute[230183]: 2025-11-23 21:06:32.060 230187 DEBUG oslo_concurrency.lockutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:06:32 compute-1 nova_compute[230183]: 2025-11-23 21:06:32.085 230187 DEBUG nova.compute.manager [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 23 21:06:32 compute-1 nova_compute[230183]: 2025-11-23 21:06:32.231 230187 DEBUG oslo_concurrency.lockutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:06:32 compute-1 nova_compute[230183]: 2025-11-23 21:06:32.231 230187 DEBUG oslo_concurrency.lockutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:06:32 compute-1 nova_compute[230183]: 2025-11-23 21:06:32.241 230187 DEBUG nova.virt.hardware [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 23 21:06:32 compute-1 nova_compute[230183]: 2025-11-23 21:06:32.242 230187 INFO nova.compute.claims [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Claim successful on node compute-1.ctlplane.example.com
Nov 23 21:06:32 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:32 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:32 compute-1 nova_compute[230183]: 2025-11-23 21:06:32.355 230187 DEBUG oslo_concurrency.processutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:06:32 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210632 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 21:06:32 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:32 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb1300016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:32.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:32 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:06:32 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/848485788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:06:32 compute-1 nova_compute[230183]: 2025-11-23 21:06:32.811 230187 DEBUG oslo_concurrency.processutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:06:32 compute-1 nova_compute[230183]: 2025-11-23 21:06:32.817 230187 DEBUG nova.compute.provider_tree [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:06:32 compute-1 nova_compute[230183]: 2025-11-23 21:06:32.834 230187 DEBUG nova.scheduler.client.report [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:06:32 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/848485788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:06:32 compute-1 nova_compute[230183]: 2025-11-23 21:06:32.862 230187 DEBUG oslo_concurrency.lockutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:06:32 compute-1 nova_compute[230183]: 2025-11-23 21:06:32.863 230187 DEBUG nova.compute.manager [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 23 21:06:32 compute-1 nova_compute[230183]: 2025-11-23 21:06:32.931 230187 DEBUG nova.compute.manager [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 23 21:06:32 compute-1 nova_compute[230183]: 2025-11-23 21:06:32.932 230187 DEBUG nova.network.neutron [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 23 21:06:32 compute-1 nova_compute[230183]: 2025-11-23 21:06:32.951 230187 INFO nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 23 21:06:32 compute-1 nova_compute[230183]: 2025-11-23 21:06:32.969 230187 DEBUG nova.compute.manager [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 23 21:06:33 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:33 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb1200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:33 compute-1 nova_compute[230183]: 2025-11-23 21:06:33.069 230187 DEBUG nova.compute.manager [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 23 21:06:33 compute-1 nova_compute[230183]: 2025-11-23 21:06:33.070 230187 DEBUG nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 23 21:06:33 compute-1 nova_compute[230183]: 2025-11-23 21:06:33.071 230187 INFO nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Creating image(s)
Nov 23 21:06:33 compute-1 nova_compute[230183]: 2025-11-23 21:06:33.098 230187 DEBUG nova.storage.rbd_utils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image b88f69cf-a706-408d-8dd0-9c891ac278df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:06:33 compute-1 nova_compute[230183]: 2025-11-23 21:06:33.122 230187 DEBUG nova.storage.rbd_utils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image b88f69cf-a706-408d-8dd0-9c891ac278df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:06:33 compute-1 nova_compute[230183]: 2025-11-23 21:06:33.147 230187 DEBUG nova.storage.rbd_utils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image b88f69cf-a706-408d-8dd0-9c891ac278df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:06:33 compute-1 nova_compute[230183]: 2025-11-23 21:06:33.150 230187 DEBUG oslo_concurrency.lockutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:06:33 compute-1 nova_compute[230183]: 2025-11-23 21:06:33.151 230187 DEBUG oslo_concurrency.lockutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:06:33 compute-1 ceph-mon[80135]: pgmap v751: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 767 B/s wr, 2 op/s
Nov 23 21:06:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:06:33 compute-1 nova_compute[230183]: 2025-11-23 21:06:33.908 230187 WARNING oslo_policy.policy [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Nov 23 21:06:33 compute-1 nova_compute[230183]: 2025-11-23 21:06:33.909 230187 WARNING oslo_policy.policy [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Nov 23 21:06:33 compute-1 nova_compute[230183]: 2025-11-23 21:06:33.912 230187 DEBUG nova.policy [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9fb5352c62684f2ba3a326a953a10dfe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '782593db60784ab8bff41fe87d72ff5f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 23 21:06:33 compute-1 nova_compute[230183]: 2025-11-23 21:06:33.931 230187 DEBUG nova.virt.libvirt.imagebackend [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Image locations are: [{'url': 'rbd://03808be8-ae4a-5548-82e6-4a294f1bc627/images/3c45fa6c-8a99-4359-a34e-d89f4e1e77d0/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://03808be8-ae4a-5548-82e6-4a294f1bc627/images/3c45fa6c-8a99-4359-a34e-d89f4e1e77d0/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 23 21:06:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:06:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:34.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:06:34 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:34 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:34 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:34 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:34.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:35 compute-1 nova_compute[230183]: 2025-11-23 21:06:35.030 230187 DEBUG oslo_concurrency.processutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:06:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:35 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb1300016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:35 compute-1 nova_compute[230183]: 2025-11-23 21:06:35.082 230187 DEBUG oslo_concurrency.processutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56.part --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:06:35 compute-1 nova_compute[230183]: 2025-11-23 21:06:35.083 230187 DEBUG nova.virt.images [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] 3c45fa6c-8a99-4359-a34e-d89f4e1e77d0 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 23 21:06:35 compute-1 nova_compute[230183]: 2025-11-23 21:06:35.084 230187 DEBUG nova.privsep.utils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 23 21:06:35 compute-1 nova_compute[230183]: 2025-11-23 21:06:35.085 230187 DEBUG oslo_concurrency.processutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56.part /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:06:35 compute-1 nova_compute[230183]: 2025-11-23 21:06:35.099 230187 DEBUG nova.network.neutron [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Successfully created port: f23315bc-0f2d-4e45-91a2-0f72a8929b88 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 23 21:06:35 compute-1 nova_compute[230183]: 2025-11-23 21:06:35.391 230187 DEBUG oslo_concurrency.processutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56.part /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56.converted" returned: 0 in 0.306s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:06:35 compute-1 nova_compute[230183]: 2025-11-23 21:06:35.395 230187 DEBUG oslo_concurrency.processutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:06:35 compute-1 nova_compute[230183]: 2025-11-23 21:06:35.444 230187 DEBUG oslo_concurrency.processutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56.converted --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:06:35 compute-1 nova_compute[230183]: 2025-11-23 21:06:35.445 230187 DEBUG oslo_concurrency.lockutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.294s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:06:35 compute-1 nova_compute[230183]: 2025-11-23 21:06:35.471 230187 DEBUG nova.storage.rbd_utils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image b88f69cf-a706-408d-8dd0-9c891ac278df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:06:35 compute-1 nova_compute[230183]: 2025-11-23 21:06:35.474 230187 DEBUG oslo_concurrency.processutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 b88f69cf-a706-408d-8dd0-9c891ac278df_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:06:35 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0.
Nov 23 21:06:35 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:06:35.504165) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 21:06:35 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46
Nov 23 21:06:35 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931995504207, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 814, "num_deletes": 255, "total_data_size": 1683854, "memory_usage": 1710824, "flush_reason": "Manual Compaction"}
Nov 23 21:06:35 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started
Nov 23 21:06:35 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931995520330, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 1093658, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24418, "largest_seqno": 25227, "table_properties": {"data_size": 1089759, "index_size": 1679, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8609, "raw_average_key_size": 18, "raw_value_size": 1081741, "raw_average_value_size": 2351, "num_data_blocks": 74, "num_entries": 460, "num_filter_entries": 460, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763931946, "oldest_key_time": 1763931946, "file_creation_time": 1763931995, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Nov 23 21:06:35 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 16277 microseconds, and 5707 cpu microseconds.
Nov 23 21:06:35 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 21:06:35 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:06:35.520439) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 1093658 bytes OK
Nov 23 21:06:35 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:06:35.520488) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started
Nov 23 21:06:35 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:06:35.541748) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done
Nov 23 21:06:35 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:06:35.541807) EVENT_LOG_v1 {"time_micros": 1763931995541794, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 21:06:35 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:06:35.541833) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 21:06:35 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 1679569, prev total WAL file size 1679569, number of live WAL files 2.
Nov 23 21:06:35 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:06:35 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:06:35.543570) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353032' seq:0, type:0; will stop at (end)
Nov 23 21:06:35 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 21:06:35 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(1068KB)], [45(12MB)]
Nov 23 21:06:35 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931995543626, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 14130660, "oldest_snapshot_seqno": -1}
Nov 23 21:06:35 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 5424 keys, 13968182 bytes, temperature: kUnknown
Nov 23 21:06:35 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931995811204, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 13968182, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13931576, "index_size": 21968, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13573, "raw_key_size": 138465, "raw_average_key_size": 25, "raw_value_size": 13833097, "raw_average_value_size": 2550, "num_data_blocks": 897, "num_entries": 5424, "num_filter_entries": 5424, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763931995, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}}
Nov 23 21:06:35 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 21:06:35 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:06:35.811517) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 13968182 bytes
Nov 23 21:06:35 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:06:35.843992) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 52.8 rd, 52.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 12.4 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(25.7) write-amplify(12.8) OK, records in: 5951, records dropped: 527 output_compression: NoCompression
Nov 23 21:06:35 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:06:35.844020) EVENT_LOG_v1 {"time_micros": 1763931995844009, "job": 26, "event": "compaction_finished", "compaction_time_micros": 267666, "compaction_time_cpu_micros": 36772, "output_level": 6, "num_output_files": 1, "total_output_size": 13968182, "num_input_records": 5951, "num_output_records": 5424, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 21:06:35 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:06:35 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931995844333, "job": 26, "event": "table_file_deletion", "file_number": 47}
Nov 23 21:06:35 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:06:35 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931995846184, "job": 26, "event": "table_file_deletion", "file_number": 45}
Nov 23 21:06:35 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:06:35.543480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:06:35 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:06:35.846274) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:06:35 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:06:35.846280) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:06:35 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:06:35.846282) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:06:35 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:06:35.846283) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:06:35 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:06:35.846285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:06:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:36.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:36 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:36 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb1200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:36 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:36 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:36 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:06:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:36.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:36 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e148 e148: 3 total, 3 up, 3 in
Nov 23 21:06:36 compute-1 nova_compute[230183]: 2025-11-23 21:06:36.900 230187 DEBUG nova.network.neutron [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Successfully updated port: f23315bc-0f2d-4e45-91a2-0f72a8929b88 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 23 21:06:36 compute-1 nova_compute[230183]: 2025-11-23 21:06:36.919 230187 DEBUG oslo_concurrency.lockutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "refresh_cache-b88f69cf-a706-408d-8dd0-9c891ac278df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:06:36 compute-1 nova_compute[230183]: 2025-11-23 21:06:36.919 230187 DEBUG oslo_concurrency.lockutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquired lock "refresh_cache-b88f69cf-a706-408d-8dd0-9c891ac278df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:06:36 compute-1 nova_compute[230183]: 2025-11-23 21:06:36.919 230187 DEBUG nova.network.neutron [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 23 21:06:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:37 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:37 compute-1 nova_compute[230183]: 2025-11-23 21:06:37.095 230187 DEBUG nova.network.neutron [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 23 21:06:37 compute-1 nova_compute[230183]: 2025-11-23 21:06:37.403 230187 DEBUG nova.compute.manager [req-345d58e9-898a-4a71-8f05-7b645ced737e req-ae82acbf-3479-437b-aa85-1f87e2911a12 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received event network-changed-f23315bc-0f2d-4e45-91a2-0f72a8929b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:06:37 compute-1 nova_compute[230183]: 2025-11-23 21:06:37.404 230187 DEBUG nova.compute.manager [req-345d58e9-898a-4a71-8f05-7b645ced737e req-ae82acbf-3479-437b-aa85-1f87e2911a12 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Refreshing instance network info cache due to event network-changed-f23315bc-0f2d-4e45-91a2-0f72a8929b88. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 23 21:06:37 compute-1 nova_compute[230183]: 2025-11-23 21:06:37.404 230187 DEBUG oslo_concurrency.lockutils [req-345d58e9-898a-4a71-8f05-7b645ced737e req-ae82acbf-3479-437b-aa85-1f87e2911a12 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-b88f69cf-a706-408d-8dd0-9c891ac278df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:06:37 compute-1 ceph-mon[80135]: pgmap v752: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 689 KiB/s rd, 767 B/s wr, 9 op/s
Nov 23 21:06:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:38.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:38 compute-1 nova_compute[230183]: 2025-11-23 21:06:38.137 230187 DEBUG nova.network.neutron [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Updating instance_info_cache with network_info: [{"id": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "address": "fa:16:3e:6f:7a:f0", "network": {"id": "7aadcd86-30a0-48ed-988a-324cae3af3e6", "bridge": "br-int", "label": "tempest-network-smoke--57523881", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf23315bc-0f", "ovs_interfaceid": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:06:38 compute-1 nova_compute[230183]: 2025-11-23 21:06:38.153 230187 DEBUG oslo_concurrency.lockutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Releasing lock "refresh_cache-b88f69cf-a706-408d-8dd0-9c891ac278df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:06:38 compute-1 nova_compute[230183]: 2025-11-23 21:06:38.153 230187 DEBUG nova.compute.manager [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Instance network_info: |[{"id": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "address": "fa:16:3e:6f:7a:f0", "network": {"id": "7aadcd86-30a0-48ed-988a-324cae3af3e6", "bridge": "br-int", "label": "tempest-network-smoke--57523881", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf23315bc-0f", "ovs_interfaceid": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 23 21:06:38 compute-1 nova_compute[230183]: 2025-11-23 21:06:38.153 230187 DEBUG oslo_concurrency.lockutils [req-345d58e9-898a-4a71-8f05-7b645ced737e req-ae82acbf-3479-437b-aa85-1f87e2911a12 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-b88f69cf-a706-408d-8dd0-9c891ac278df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:06:38 compute-1 nova_compute[230183]: 2025-11-23 21:06:38.153 230187 DEBUG nova.network.neutron [req-345d58e9-898a-4a71-8f05-7b645ced737e req-ae82acbf-3479-437b-aa85-1f87e2911a12 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Refreshing network info cache for port f23315bc-0f2d-4e45-91a2-0f72a8929b88 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 23 21:06:38 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:38 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:38 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:38 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb1200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:38.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:39 compute-1 nova_compute[230183]: 2025-11-23 21:06:39.389 230187 DEBUG nova.network.neutron [req-345d58e9-898a-4a71-8f05-7b645ced737e req-ae82acbf-3479-437b-aa85-1f87e2911a12 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Updated VIF entry in instance network info cache for port f23315bc-0f2d-4e45-91a2-0f72a8929b88. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 23 21:06:39 compute-1 nova_compute[230183]: 2025-11-23 21:06:39.389 230187 DEBUG nova.network.neutron [req-345d58e9-898a-4a71-8f05-7b645ced737e req-ae82acbf-3479-437b-aa85-1f87e2911a12 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Updating instance_info_cache with network_info: [{"id": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "address": "fa:16:3e:6f:7a:f0", "network": {"id": "7aadcd86-30a0-48ed-988a-324cae3af3e6", "bridge": "br-int", "label": "tempest-network-smoke--57523881", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf23315bc-0f", "ovs_interfaceid": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:06:39 compute-1 nova_compute[230183]: 2025-11-23 21:06:39.401 230187 DEBUG oslo_concurrency.lockutils [req-345d58e9-898a-4a71-8f05-7b645ced737e req-ae82acbf-3479-437b-aa85-1f87e2911a12 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-b88f69cf-a706-408d-8dd0-9c891ac278df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:06:39 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:39 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c0032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:39 compute-1 ceph-mon[80135]: osdmap e148: 3 total, 3 up, 3 in
Nov 23 21:06:39 compute-1 ceph-mon[80135]: pgmap v754: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 825 KiB/s rd, 102 B/s wr, 7 op/s
Nov 23 21:06:39 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e149 e149: 3 total, 3 up, 3 in
Nov 23 21:06:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:40.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:40 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:40 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:40 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:40 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:40 compute-1 nova_compute[230183]: 2025-11-23 21:06:40.448 230187 DEBUG oslo_concurrency.processutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 b88f69cf-a706-408d-8dd0-9c891ac278df_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.974s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:06:40 compute-1 ceph-mon[80135]: pgmap v755: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 825 KiB/s rd, 102 B/s wr, 7 op/s
Nov 23 21:06:40 compute-1 ceph-mon[80135]: osdmap e149: 3 total, 3 up, 3 in
Nov 23 21:06:40 compute-1 nova_compute[230183]: 2025-11-23 21:06:40.511 230187 DEBUG nova.storage.rbd_utils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] resizing rbd image b88f69cf-a706-408d-8dd0-9c891ac278df_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 23 21:06:40 compute-1 nova_compute[230183]: 2025-11-23 21:06:40.609 230187 DEBUG nova.objects.instance [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'migration_context' on Instance uuid b88f69cf-a706-408d-8dd0-9c891ac278df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:06:40 compute-1 nova_compute[230183]: 2025-11-23 21:06:40.621 230187 DEBUG nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 23 21:06:40 compute-1 nova_compute[230183]: 2025-11-23 21:06:40.623 230187 DEBUG nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Ensure instance console log exists: /var/lib/nova/instances/b88f69cf-a706-408d-8dd0-9c891ac278df/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 23 21:06:40 compute-1 nova_compute[230183]: 2025-11-23 21:06:40.624 230187 DEBUG oslo_concurrency.lockutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:06:40 compute-1 nova_compute[230183]: 2025-11-23 21:06:40.624 230187 DEBUG oslo_concurrency.lockutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:06:40 compute-1 nova_compute[230183]: 2025-11-23 21:06:40.624 230187 DEBUG oslo_concurrency.lockutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:06:40 compute-1 nova_compute[230183]: 2025-11-23 21:06:40.627 230187 DEBUG nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Start _get_guest_xml network_info=[{"id": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "address": "fa:16:3e:6f:7a:f0", "network": {"id": "7aadcd86-30a0-48ed-988a-324cae3af3e6", "bridge": "br-int", "label": "tempest-network-smoke--57523881", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf23315bc-0f", "ovs_interfaceid": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T21:05:43Z,direct_url=<?>,disk_format='qcow2',id=3c45fa6c-8a99-4359-a34e-d89f4e1e77d0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3f8fb5175f85402ba20cf9c6989d47cf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-23T21:05:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_options': None, 'encryption_format': None, 'guest_format': None, 'encrypted': False, 'size': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'image_id': '3c45fa6c-8a99-4359-a34e-d89f4e1e77d0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 23 21:06:40 compute-1 nova_compute[230183]: 2025-11-23 21:06:40.633 230187 WARNING nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 21:06:40 compute-1 nova_compute[230183]: 2025-11-23 21:06:40.638 230187 DEBUG nova.virt.libvirt.host [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 23 21:06:40 compute-1 nova_compute[230183]: 2025-11-23 21:06:40.639 230187 DEBUG nova.virt.libvirt.host [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 23 21:06:40 compute-1 nova_compute[230183]: 2025-11-23 21:06:40.641 230187 DEBUG nova.virt.libvirt.host [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 23 21:06:40 compute-1 nova_compute[230183]: 2025-11-23 21:06:40.642 230187 DEBUG nova.virt.libvirt.host [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 23 21:06:40 compute-1 nova_compute[230183]: 2025-11-23 21:06:40.642 230187 DEBUG nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 23 21:06:40 compute-1 nova_compute[230183]: 2025-11-23 21:06:40.643 230187 DEBUG nova.virt.hardware [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-23T21:05:42Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='56044b93-2979-48aa-b67f-c37e1b489306',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T21:05:43Z,direct_url=<?>,disk_format='qcow2',id=3c45fa6c-8a99-4359-a34e-d89f4e1e77d0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3f8fb5175f85402ba20cf9c6989d47cf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-23T21:05:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 23 21:06:40 compute-1 nova_compute[230183]: 2025-11-23 21:06:40.643 230187 DEBUG nova.virt.hardware [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 23 21:06:40 compute-1 nova_compute[230183]: 2025-11-23 21:06:40.643 230187 DEBUG nova.virt.hardware [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 23 21:06:40 compute-1 nova_compute[230183]: 2025-11-23 21:06:40.644 230187 DEBUG nova.virt.hardware [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 23 21:06:40 compute-1 nova_compute[230183]: 2025-11-23 21:06:40.644 230187 DEBUG nova.virt.hardware [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 23 21:06:40 compute-1 nova_compute[230183]: 2025-11-23 21:06:40.644 230187 DEBUG nova.virt.hardware [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 23 21:06:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:40 compute-1 nova_compute[230183]: 2025-11-23 21:06:40.644 230187 DEBUG nova.virt.hardware [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 23 21:06:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:40 compute-1 nova_compute[230183]: 2025-11-23 21:06:40.645 230187 DEBUG nova.virt.hardware [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 23 21:06:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:40.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:40 compute-1 nova_compute[230183]: 2025-11-23 21:06:40.645 230187 DEBUG nova.virt.hardware [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 23 21:06:40 compute-1 nova_compute[230183]: 2025-11-23 21:06:40.645 230187 DEBUG nova.virt.hardware [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 23 21:06:40 compute-1 nova_compute[230183]: 2025-11-23 21:06:40.646 230187 DEBUG nova.virt.hardware [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 23 21:06:40 compute-1 nova_compute[230183]: 2025-11-23 21:06:40.650 230187 DEBUG nova.privsep.utils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 23 21:06:40 compute-1 nova_compute[230183]: 2025-11-23 21:06:40.650 230187 DEBUG oslo_concurrency.processutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:06:41 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 21:06:41 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4087426359' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:06:41 compute-1 nova_compute[230183]: 2025-11-23 21:06:41.076 230187 DEBUG oslo_concurrency.processutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:06:41 compute-1 nova_compute[230183]: 2025-11-23 21:06:41.103 230187 DEBUG nova.storage.rbd_utils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image b88f69cf-a706-408d-8dd0-9c891ac278df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:06:41 compute-1 nova_compute[230183]: 2025-11-23 21:06:41.106 230187 DEBUG oslo_concurrency.processutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:06:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:41 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:41 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/4087426359' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:06:41 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:06:41 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 21:06:41 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1092088785' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:06:41 compute-1 nova_compute[230183]: 2025-11-23 21:06:41.534 230187 DEBUG oslo_concurrency.processutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:06:41 compute-1 nova_compute[230183]: 2025-11-23 21:06:41.539 230187 DEBUG nova.virt.libvirt.vif [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-23T21:06:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2101370279',display_name='tempest-TestNetworkBasicOps-server-2101370279',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2101370279',id=1,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPkopXsVozaBPjiL+h6NejRz4cW0k9/uA5JpHUVBsNmasGNuNCs7C0SGQ6LkonC2lifS0mLNUtTMnfgtFGQBRj5+CsXOBseSmB+++OQ3W87ZPdTUTnkg9uBrGbnjrus9+A==',key_name='tempest-TestNetworkBasicOps-1595128200',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-97azc21p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T21:06:33Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=b88f69cf-a706-408d-8dd0-9c891ac278df,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "address": "fa:16:3e:6f:7a:f0", "network": {"id": "7aadcd86-30a0-48ed-988a-324cae3af3e6", "bridge": "br-int", "label": "tempest-network-smoke--57523881", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf23315bc-0f", "ovs_interfaceid": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 23 21:06:41 compute-1 nova_compute[230183]: 2025-11-23 21:06:41.540 230187 DEBUG nova.network.os_vif_util [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "address": "fa:16:3e:6f:7a:f0", "network": {"id": "7aadcd86-30a0-48ed-988a-324cae3af3e6", "bridge": "br-int", "label": "tempest-network-smoke--57523881", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf23315bc-0f", "ovs_interfaceid": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 23 21:06:41 compute-1 nova_compute[230183]: 2025-11-23 21:06:41.542 230187 DEBUG nova.network.os_vif_util [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:7a:f0,bridge_name='br-int',has_traffic_filtering=True,id=f23315bc-0f2d-4e45-91a2-0f72a8929b88,network=Network(7aadcd86-30a0-48ed-988a-324cae3af3e6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf23315bc-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 23 21:06:41 compute-1 nova_compute[230183]: 2025-11-23 21:06:41.547 230187 DEBUG nova.objects.instance [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'pci_devices' on Instance uuid b88f69cf-a706-408d-8dd0-9c891ac278df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:06:41 compute-1 nova_compute[230183]: 2025-11-23 21:06:41.572 230187 DEBUG nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] End _get_guest_xml xml=<domain type="kvm">
Nov 23 21:06:41 compute-1 nova_compute[230183]:   <uuid>b88f69cf-a706-408d-8dd0-9c891ac278df</uuid>
Nov 23 21:06:41 compute-1 nova_compute[230183]:   <name>instance-00000001</name>
Nov 23 21:06:41 compute-1 nova_compute[230183]:   <memory>131072</memory>
Nov 23 21:06:41 compute-1 nova_compute[230183]:   <vcpu>1</vcpu>
Nov 23 21:06:41 compute-1 nova_compute[230183]:   <metadata>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 21:06:41 compute-1 nova_compute[230183]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:       <nova:name>tempest-TestNetworkBasicOps-server-2101370279</nova:name>
Nov 23 21:06:41 compute-1 nova_compute[230183]:       <nova:creationTime>2025-11-23 21:06:40</nova:creationTime>
Nov 23 21:06:41 compute-1 nova_compute[230183]:       <nova:flavor name="m1.nano">
Nov 23 21:06:41 compute-1 nova_compute[230183]:         <nova:memory>128</nova:memory>
Nov 23 21:06:41 compute-1 nova_compute[230183]:         <nova:disk>1</nova:disk>
Nov 23 21:06:41 compute-1 nova_compute[230183]:         <nova:swap>0</nova:swap>
Nov 23 21:06:41 compute-1 nova_compute[230183]:         <nova:ephemeral>0</nova:ephemeral>
Nov 23 21:06:41 compute-1 nova_compute[230183]:         <nova:vcpus>1</nova:vcpus>
Nov 23 21:06:41 compute-1 nova_compute[230183]:       </nova:flavor>
Nov 23 21:06:41 compute-1 nova_compute[230183]:       <nova:owner>
Nov 23 21:06:41 compute-1 nova_compute[230183]:         <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 21:06:41 compute-1 nova_compute[230183]:         <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 21:06:41 compute-1 nova_compute[230183]:       </nova:owner>
Nov 23 21:06:41 compute-1 nova_compute[230183]:       <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:       <nova:ports>
Nov 23 21:06:41 compute-1 nova_compute[230183]:         <nova:port uuid="f23315bc-0f2d-4e45-91a2-0f72a8929b88">
Nov 23 21:06:41 compute-1 nova_compute[230183]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:         </nova:port>
Nov 23 21:06:41 compute-1 nova_compute[230183]:       </nova:ports>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     </nova:instance>
Nov 23 21:06:41 compute-1 nova_compute[230183]:   </metadata>
Nov 23 21:06:41 compute-1 nova_compute[230183]:   <sysinfo type="smbios">
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <system>
Nov 23 21:06:41 compute-1 nova_compute[230183]:       <entry name="manufacturer">RDO</entry>
Nov 23 21:06:41 compute-1 nova_compute[230183]:       <entry name="product">OpenStack Compute</entry>
Nov 23 21:06:41 compute-1 nova_compute[230183]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 23 21:06:41 compute-1 nova_compute[230183]:       <entry name="serial">b88f69cf-a706-408d-8dd0-9c891ac278df</entry>
Nov 23 21:06:41 compute-1 nova_compute[230183]:       <entry name="uuid">b88f69cf-a706-408d-8dd0-9c891ac278df</entry>
Nov 23 21:06:41 compute-1 nova_compute[230183]:       <entry name="family">Virtual Machine</entry>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     </system>
Nov 23 21:06:41 compute-1 nova_compute[230183]:   </sysinfo>
Nov 23 21:06:41 compute-1 nova_compute[230183]:   <os>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <boot dev="hd"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <smbios mode="sysinfo"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:   </os>
Nov 23 21:06:41 compute-1 nova_compute[230183]:   <features>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <acpi/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <apic/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <vmcoreinfo/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:   </features>
Nov 23 21:06:41 compute-1 nova_compute[230183]:   <clock offset="utc">
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <timer name="pit" tickpolicy="delay"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <timer name="hpet" present="no"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:   </clock>
Nov 23 21:06:41 compute-1 nova_compute[230183]:   <cpu mode="host-model" match="exact">
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <topology sockets="1" cores="1" threads="1"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:   </cpu>
Nov 23 21:06:41 compute-1 nova_compute[230183]:   <devices>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <disk type="network" device="disk">
Nov 23 21:06:41 compute-1 nova_compute[230183]:       <driver type="raw" cache="none"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:       <source protocol="rbd" name="vms/b88f69cf-a706-408d-8dd0-9c891ac278df_disk">
Nov 23 21:06:41 compute-1 nova_compute[230183]:         <host name="192.168.122.100" port="6789"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:         <host name="192.168.122.102" port="6789"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:         <host name="192.168.122.101" port="6789"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:       </source>
Nov 23 21:06:41 compute-1 nova_compute[230183]:       <auth username="openstack">
Nov 23 21:06:41 compute-1 nova_compute[230183]:         <secret type="ceph" uuid="03808be8-ae4a-5548-82e6-4a294f1bc627"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:       </auth>
Nov 23 21:06:41 compute-1 nova_compute[230183]:       <target dev="vda" bus="virtio"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     </disk>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <disk type="network" device="cdrom">
Nov 23 21:06:41 compute-1 nova_compute[230183]:       <driver type="raw" cache="none"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:       <source protocol="rbd" name="vms/b88f69cf-a706-408d-8dd0-9c891ac278df_disk.config">
Nov 23 21:06:41 compute-1 nova_compute[230183]:         <host name="192.168.122.100" port="6789"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:         <host name="192.168.122.102" port="6789"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:         <host name="192.168.122.101" port="6789"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:       </source>
Nov 23 21:06:41 compute-1 nova_compute[230183]:       <auth username="openstack">
Nov 23 21:06:41 compute-1 nova_compute[230183]:         <secret type="ceph" uuid="03808be8-ae4a-5548-82e6-4a294f1bc627"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:       </auth>
Nov 23 21:06:41 compute-1 nova_compute[230183]:       <target dev="sda" bus="sata"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     </disk>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <interface type="ethernet">
Nov 23 21:06:41 compute-1 nova_compute[230183]:       <mac address="fa:16:3e:6f:7a:f0"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:       <model type="virtio"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:       <driver name="vhost" rx_queue_size="512"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:       <mtu size="1442"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:       <target dev="tapf23315bc-0f"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     </interface>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <serial type="pty">
Nov 23 21:06:41 compute-1 nova_compute[230183]:       <log file="/var/lib/nova/instances/b88f69cf-a706-408d-8dd0-9c891ac278df/console.log" append="off"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     </serial>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <video>
Nov 23 21:06:41 compute-1 nova_compute[230183]:       <model type="virtio"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     </video>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <input type="tablet" bus="usb"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <rng model="virtio">
Nov 23 21:06:41 compute-1 nova_compute[230183]:       <backend model="random">/dev/urandom</backend>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     </rng>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <controller type="usb" index="0"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     <memballoon model="virtio">
Nov 23 21:06:41 compute-1 nova_compute[230183]:       <stats period="10"/>
Nov 23 21:06:41 compute-1 nova_compute[230183]:     </memballoon>
Nov 23 21:06:41 compute-1 nova_compute[230183]:   </devices>
Nov 23 21:06:41 compute-1 nova_compute[230183]: </domain>
Nov 23 21:06:41 compute-1 nova_compute[230183]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 23 21:06:41 compute-1 nova_compute[230183]: 2025-11-23 21:06:41.574 230187 DEBUG nova.compute.manager [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Preparing to wait for external event network-vif-plugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 23 21:06:41 compute-1 nova_compute[230183]: 2025-11-23 21:06:41.575 230187 DEBUG oslo_concurrency.lockutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:06:41 compute-1 nova_compute[230183]: 2025-11-23 21:06:41.575 230187 DEBUG oslo_concurrency.lockutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:06:41 compute-1 nova_compute[230183]: 2025-11-23 21:06:41.575 230187 DEBUG oslo_concurrency.lockutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:06:41 compute-1 nova_compute[230183]: 2025-11-23 21:06:41.576 230187 DEBUG nova.virt.libvirt.vif [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-23T21:06:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2101370279',display_name='tempest-TestNetworkBasicOps-server-2101370279',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2101370279',id=1,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPkopXsVozaBPjiL+h6NejRz4cW0k9/uA5JpHUVBsNmasGNuNCs7C0SGQ6LkonC2lifS0mLNUtTMnfgtFGQBRj5+CsXOBseSmB+++OQ3W87ZPdTUTnkg9uBrGbnjrus9+A==',key_name='tempest-TestNetworkBasicOps-1595128200',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-97azc21p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T21:06:33Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=b88f69cf-a706-408d-8dd0-9c891ac278df,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "address": "fa:16:3e:6f:7a:f0", "network": {"id": "7aadcd86-30a0-48ed-988a-324cae3af3e6", "bridge": "br-int", "label": "tempest-network-smoke--57523881", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf23315bc-0f", "ovs_interfaceid": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 23 21:06:41 compute-1 nova_compute[230183]: 2025-11-23 21:06:41.576 230187 DEBUG nova.network.os_vif_util [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "address": "fa:16:3e:6f:7a:f0", "network": {"id": "7aadcd86-30a0-48ed-988a-324cae3af3e6", "bridge": "br-int", "label": "tempest-network-smoke--57523881", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf23315bc-0f", "ovs_interfaceid": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 23 21:06:41 compute-1 nova_compute[230183]: 2025-11-23 21:06:41.577 230187 DEBUG nova.network.os_vif_util [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:7a:f0,bridge_name='br-int',has_traffic_filtering=True,id=f23315bc-0f2d-4e45-91a2-0f72a8929b88,network=Network(7aadcd86-30a0-48ed-988a-324cae3af3e6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf23315bc-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 23 21:06:41 compute-1 nova_compute[230183]: 2025-11-23 21:06:41.577 230187 DEBUG os_vif [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:7a:f0,bridge_name='br-int',has_traffic_filtering=True,id=f23315bc-0f2d-4e45-91a2-0f72a8929b88,network=Network(7aadcd86-30a0-48ed-988a-324cae3af3e6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf23315bc-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 23 21:06:41 compute-1 nova_compute[230183]: 2025-11-23 21:06:41.625 230187 DEBUG ovsdbapp.backend.ovs_idl [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 23 21:06:41 compute-1 nova_compute[230183]: 2025-11-23 21:06:41.626 230187 DEBUG ovsdbapp.backend.ovs_idl [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 23 21:06:41 compute-1 nova_compute[230183]: 2025-11-23 21:06:41.626 230187 DEBUG ovsdbapp.backend.ovs_idl [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 23 21:06:41 compute-1 nova_compute[230183]: 2025-11-23 21:06:41.626 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:06:41 compute-1 nova_compute[230183]: 2025-11-23 21:06:41.627 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [POLLOUT] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:06:41 compute-1 nova_compute[230183]: 2025-11-23 21:06:41.627 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:06:41 compute-1 nova_compute[230183]: 2025-11-23 21:06:41.628 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:06:41 compute-1 nova_compute[230183]: 2025-11-23 21:06:41.629 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:06:41 compute-1 nova_compute[230183]: 2025-11-23 21:06:41.630 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:06:41 compute-1 nova_compute[230183]: 2025-11-23 21:06:41.640 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:06:41 compute-1 nova_compute[230183]: 2025-11-23 21:06:41.640 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:06:41 compute-1 nova_compute[230183]: 2025-11-23 21:06:41.641 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 23 21:06:41 compute-1 nova_compute[230183]: 2025-11-23 21:06:41.641 230187 INFO oslo.privsep.daemon [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp28m6et_r/privsep.sock']
Nov 23 21:06:41 compute-1 nova_compute[230183]: 2025-11-23 21:06:41.767 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:06:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:42.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:42 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c0032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:42 compute-1 nova_compute[230183]: 2025-11-23 21:06:42.405 230187 INFO oslo.privsep.daemon [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Spawned new privsep daemon via rootwrap
Nov 23 21:06:42 compute-1 nova_compute[230183]: 2025-11-23 21:06:42.266 233675 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 23 21:06:42 compute-1 nova_compute[230183]: 2025-11-23 21:06:42.271 233675 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 23 21:06:42 compute-1 nova_compute[230183]: 2025-11-23 21:06:42.274 233675 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Nov 23 21:06:42 compute-1 nova_compute[230183]: 2025-11-23 21:06:42.274 233675 INFO oslo.privsep.daemon [-] privsep daemon running as pid 233675
Nov 23 21:06:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:42 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb1300027c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:42 compute-1 ceph-mon[80135]: pgmap v757: 337 pgs: 2 active+clean+snaptrim, 335 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 255 B/s wr, 10 op/s
Nov 23 21:06:42 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1092088785' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:06:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:42.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:42 compute-1 nova_compute[230183]: 2025-11-23 21:06:42.723 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:06:42 compute-1 nova_compute[230183]: 2025-11-23 21:06:42.724 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf23315bc-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:06:42 compute-1 nova_compute[230183]: 2025-11-23 21:06:42.724 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf23315bc-0f, col_values=(('external_ids', {'iface-id': 'f23315bc-0f2d-4e45-91a2-0f72a8929b88', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6f:7a:f0', 'vm-uuid': 'b88f69cf-a706-408d-8dd0-9c891ac278df'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:06:42 compute-1 nova_compute[230183]: 2025-11-23 21:06:42.771 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:06:42 compute-1 NetworkManager[49021]: <info>  [1763932002.7724] manager: (tapf23315bc-0f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Nov 23 21:06:42 compute-1 nova_compute[230183]: 2025-11-23 21:06:42.773 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:06:42 compute-1 nova_compute[230183]: 2025-11-23 21:06:42.781 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:06:42 compute-1 nova_compute[230183]: 2025-11-23 21:06:42.782 230187 INFO os_vif [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:7a:f0,bridge_name='br-int',has_traffic_filtering=True,id=f23315bc-0f2d-4e45-91a2-0f72a8929b88,network=Network(7aadcd86-30a0-48ed-988a-324cae3af3e6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf23315bc-0f')
Nov 23 21:06:42 compute-1 nova_compute[230183]: 2025-11-23 21:06:42.830 230187 DEBUG nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 23 21:06:42 compute-1 nova_compute[230183]: 2025-11-23 21:06:42.831 230187 DEBUG nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 23 21:06:42 compute-1 nova_compute[230183]: 2025-11-23 21:06:42.832 230187 DEBUG nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No VIF found with MAC fa:16:3e:6f:7a:f0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 23 21:06:42 compute-1 nova_compute[230183]: 2025-11-23 21:06:42.833 230187 INFO nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Using config drive
Nov 23 21:06:42 compute-1 nova_compute[230183]: 2025-11-23 21:06:42.875 230187 DEBUG nova.storage.rbd_utils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image b88f69cf-a706-408d-8dd0-9c891ac278df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:06:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:43 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:43 compute-1 sudo[233699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:06:43 compute-1 sudo[233699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:06:43 compute-1 sudo[233699]: pam_unix(sudo:session): session closed for user root
Nov 23 21:06:43 compute-1 podman[233725]: 2025-11-23 21:06:43.629596418 +0000 UTC m=+0.057059772 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent)
Nov 23 21:06:43 compute-1 podman[233724]: 2025-11-23 21:06:43.660717151 +0000 UTC m=+0.088673329 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 23 21:06:43 compute-1 nova_compute[230183]: 2025-11-23 21:06:43.890 230187 INFO nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Creating config drive at /var/lib/nova/instances/b88f69cf-a706-408d-8dd0-9c891ac278df/disk.config
Nov 23 21:06:43 compute-1 nova_compute[230183]: 2025-11-23 21:06:43.895 230187 DEBUG oslo_concurrency.processutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b88f69cf-a706-408d-8dd0-9c891ac278df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp45gqvvj7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:06:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:06:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:44.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:06:44 compute-1 nova_compute[230183]: 2025-11-23 21:06:44.043 230187 DEBUG oslo_concurrency.processutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b88f69cf-a706-408d-8dd0-9c891ac278df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp45gqvvj7" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:06:44 compute-1 nova_compute[230183]: 2025-11-23 21:06:44.074 230187 DEBUG nova.storage.rbd_utils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image b88f69cf-a706-408d-8dd0-9c891ac278df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:06:44 compute-1 nova_compute[230183]: 2025-11-23 21:06:44.076 230187 DEBUG oslo_concurrency.processutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b88f69cf-a706-408d-8dd0-9c891ac278df/disk.config b88f69cf-a706-408d-8dd0-9c891ac278df_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:06:44 compute-1 nova_compute[230183]: 2025-11-23 21:06:44.237 230187 DEBUG oslo_concurrency.processutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b88f69cf-a706-408d-8dd0-9c891ac278df/disk.config b88f69cf-a706-408d-8dd0-9c891ac278df_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:06:44 compute-1 nova_compute[230183]: 2025-11-23 21:06:44.238 230187 INFO nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Deleting local config drive /var/lib/nova/instances/b88f69cf-a706-408d-8dd0-9c891ac278df/disk.config because it was imported into RBD.
Nov 23 21:06:44 compute-1 systemd[1]: Starting libvirt secret daemon...
Nov 23 21:06:44 compute-1 systemd[1]: Started libvirt secret daemon.
Nov 23 21:06:44 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:44 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:44 compute-1 kernel: tun: Universal TUN/TAP device driver, 1.6
Nov 23 21:06:44 compute-1 kernel: tapf23315bc-0f: entered promiscuous mode
Nov 23 21:06:44 compute-1 NetworkManager[49021]: <info>  [1763932004.3244] manager: (tapf23315bc-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Nov 23 21:06:44 compute-1 ovn_controller[132845]: 2025-11-23T21:06:44Z|00027|binding|INFO|Claiming lport f23315bc-0f2d-4e45-91a2-0f72a8929b88 for this chassis.
Nov 23 21:06:44 compute-1 ovn_controller[132845]: 2025-11-23T21:06:44Z|00028|binding|INFO|f23315bc-0f2d-4e45-91a2-0f72a8929b88: Claiming fa:16:3e:6f:7a:f0 10.100.0.10
Nov 23 21:06:44 compute-1 nova_compute[230183]: 2025-11-23 21:06:44.325 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:06:44 compute-1 nova_compute[230183]: 2025-11-23 21:06:44.331 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:06:44 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:44.342 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:7a:f0 10.100.0.10'], port_security=['fa:16:3e:6f:7a:f0 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b88f69cf-a706-408d-8dd0-9c891ac278df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7aadcd86-30a0-48ed-988a-324cae3af3e6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd2fd5313-3792-44d3-ba44-78e423066c2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33862b60-f5fc-47c1-8327-a9c7a8a97ff8, chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=f23315bc-0f2d-4e45-91a2-0f72a8929b88) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 21:06:44 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:44.343 142158 INFO neutron.agent.ovn.metadata.agent [-] Port f23315bc-0f2d-4e45-91a2-0f72a8929b88 in datapath 7aadcd86-30a0-48ed-988a-324cae3af3e6 bound to our chassis
Nov 23 21:06:44 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:44.344 142158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7aadcd86-30a0-48ed-988a-324cae3af3e6
Nov 23 21:06:44 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:44.346 142158 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpfyh33cj_/privsep.sock']
Nov 23 21:06:44 compute-1 systemd-udevd[233844]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 21:06:44 compute-1 NetworkManager[49021]: <info>  [1763932004.3725] device (tapf23315bc-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 23 21:06:44 compute-1 NetworkManager[49021]: <info>  [1763932004.3733] device (tapf23315bc-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 23 21:06:44 compute-1 systemd-machined[193469]: New machine qemu-1-instance-00000001.
Nov 23 21:06:44 compute-1 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Nov 23 21:06:44 compute-1 nova_compute[230183]: 2025-11-23 21:06:44.402 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:06:44 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:44 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:44 compute-1 ovn_controller[132845]: 2025-11-23T21:06:44Z|00029|binding|INFO|Setting lport f23315bc-0f2d-4e45-91a2-0f72a8929b88 ovn-installed in OVS
Nov 23 21:06:44 compute-1 ovn_controller[132845]: 2025-11-23T21:06:44Z|00030|binding|INFO|Setting lport f23315bc-0f2d-4e45-91a2-0f72a8929b88 up in Southbound
Nov 23 21:06:44 compute-1 nova_compute[230183]: 2025-11-23 21:06:44.412 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:06:44 compute-1 ceph-mon[80135]: pgmap v758: 337 pgs: 2 active+clean+snaptrim, 335 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 255 B/s wr, 1 op/s
Nov 23 21:06:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:44.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:44 compute-1 nova_compute[230183]: 2025-11-23 21:06:44.706 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932004.7056391, b88f69cf-a706-408d-8dd0-9c891ac278df => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 23 21:06:44 compute-1 nova_compute[230183]: 2025-11-23 21:06:44.706 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] VM Started (Lifecycle Event)
Nov 23 21:06:44 compute-1 nova_compute[230183]: 2025-11-23 21:06:44.741 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:06:44 compute-1 nova_compute[230183]: 2025-11-23 21:06:44.744 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932004.7096562, b88f69cf-a706-408d-8dd0-9c891ac278df => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 23 21:06:44 compute-1 nova_compute[230183]: 2025-11-23 21:06:44.744 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] VM Paused (Lifecycle Event)
Nov 23 21:06:44 compute-1 nova_compute[230183]: 2025-11-23 21:06:44.756 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:06:44 compute-1 nova_compute[230183]: 2025-11-23 21:06:44.759 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 23 21:06:44 compute-1 nova_compute[230183]: 2025-11-23 21:06:44.771 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 23 21:06:45 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:45.027 142158 INFO oslo_service.service [-] Child 233840 exited with status 0
Nov 23 21:06:45 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:45.028 142158 WARNING oslo_service.service [-] pid 233840 not in child list
Nov 23 21:06:45 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:45.032 142158 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 23 21:06:45 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:45.033 142158 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpfyh33cj_/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 23 21:06:45 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:44.910 233901 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 23 21:06:45 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:44.915 233901 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 23 21:06:45 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:44.918 233901 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Nov 23 21:06:45 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:44.918 233901 INFO oslo.privsep.daemon [-] privsep daemon running as pid 233901
Nov 23 21:06:45 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:45.035 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[ab7fda8c-6a1c-41ed-8c2a-af8921a0f52e]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:06:45 compute-1 nova_compute[230183]: 2025-11-23 21:06:45.088 230187 DEBUG nova.compute.manager [req-e17bd455-e319-417e-a0d7-49be30e566db req-1d68124b-289f-4e37-a0a6-794135a5ef38 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received event network-vif-plugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:06:45 compute-1 nova_compute[230183]: 2025-11-23 21:06:45.088 230187 DEBUG oslo_concurrency.lockutils [req-e17bd455-e319-417e-a0d7-49be30e566db req-1d68124b-289f-4e37-a0a6-794135a5ef38 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:06:45 compute-1 nova_compute[230183]: 2025-11-23 21:06:45.089 230187 DEBUG oslo_concurrency.lockutils [req-e17bd455-e319-417e-a0d7-49be30e566db req-1d68124b-289f-4e37-a0a6-794135a5ef38 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:06:45 compute-1 nova_compute[230183]: 2025-11-23 21:06:45.089 230187 DEBUG oslo_concurrency.lockutils [req-e17bd455-e319-417e-a0d7-49be30e566db req-1d68124b-289f-4e37-a0a6-794135a5ef38 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:06:45 compute-1 nova_compute[230183]: 2025-11-23 21:06:45.089 230187 DEBUG nova.compute.manager [req-e17bd455-e319-417e-a0d7-49be30e566db req-1d68124b-289f-4e37-a0a6-794135a5ef38 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Processing event network-vif-plugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 23 21:06:45 compute-1 nova_compute[230183]: 2025-11-23 21:06:45.090 230187 DEBUG nova.compute.manager [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 23 21:06:45 compute-1 nova_compute[230183]: 2025-11-23 21:06:45.093 230187 DEBUG nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 23 21:06:45 compute-1 nova_compute[230183]: 2025-11-23 21:06:45.095 230187 INFO nova.virt.libvirt.driver [-] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Instance spawned successfully.
Nov 23 21:06:45 compute-1 nova_compute[230183]: 2025-11-23 21:06:45.095 230187 DEBUG nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 23 21:06:45 compute-1 nova_compute[230183]: 2025-11-23 21:06:45.101 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932005.1013858, b88f69cf-a706-408d-8dd0-9c891ac278df => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 23 21:06:45 compute-1 nova_compute[230183]: 2025-11-23 21:06:45.102 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] VM Resumed (Lifecycle Event)
Nov 23 21:06:45 compute-1 nova_compute[230183]: 2025-11-23 21:06:45.118 230187 DEBUG nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:06:45 compute-1 nova_compute[230183]: 2025-11-23 21:06:45.118 230187 DEBUG nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:06:45 compute-1 nova_compute[230183]: 2025-11-23 21:06:45.119 230187 DEBUG nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:06:45 compute-1 nova_compute[230183]: 2025-11-23 21:06:45.119 230187 DEBUG nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:06:45 compute-1 nova_compute[230183]: 2025-11-23 21:06:45.120 230187 DEBUG nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:06:45 compute-1 nova_compute[230183]: 2025-11-23 21:06:45.120 230187 DEBUG nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:06:45 compute-1 nova_compute[230183]: 2025-11-23 21:06:45.125 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:06:45 compute-1 nova_compute[230183]: 2025-11-23 21:06:45.129 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 23 21:06:45 compute-1 nova_compute[230183]: 2025-11-23 21:06:45.149 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 23 21:06:45 compute-1 nova_compute[230183]: 2025-11-23 21:06:45.173 230187 INFO nova.compute.manager [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Took 12.10 seconds to spawn the instance on the hypervisor.
Nov 23 21:06:45 compute-1 nova_compute[230183]: 2025-11-23 21:06:45.174 230187 DEBUG nova.compute.manager [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:06:45 compute-1 nova_compute[230183]: 2025-11-23 21:06:45.234 230187 INFO nova.compute.manager [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Took 13.03 seconds to build instance.
Nov 23 21:06:45 compute-1 nova_compute[230183]: 2025-11-23 21:06:45.261 230187 DEBUG oslo_concurrency.lockutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.201s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:06:45 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:45 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:45 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:45.847 233901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:06:45 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:45.847 233901 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:06:45 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:45.847 233901 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:06:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:46.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:46 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:46 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:46 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:06:46 compute-1 ceph-mon[80135]: pgmap v759: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.6 MiB/s wr, 50 op/s
Nov 23 21:06:46 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:46.634 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[b01960fe-cfa7-4186-9d49-0519c4438fce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:06:46 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:46.635 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7aadcd86-31 in ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 23 21:06:46 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:46.637 233901 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7aadcd86-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 23 21:06:46 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:46.637 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[b99ab6b0-5ac6-4556-840d-f064d5ab9107]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:06:46 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:46.641 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[95453304-76f1-4c5b-805e-8ddaf2b9e254]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:06:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:46.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:46 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:46.662 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[7d1925a6-db4b-4a59-923f-671b185b2025]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:06:46 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:46.695 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[5fa8efde-9f7a-4ce9-aaa5-2c04746bdba5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:06:46 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:46.697 142158 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpuwkaw7zn/privsep.sock']
Nov 23 21:06:46 compute-1 nova_compute[230183]: 2025-11-23 21:06:46.769 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:06:47 compute-1 nova_compute[230183]: 2025-11-23 21:06:47.294 230187 DEBUG nova.compute.manager [req-7243a388-ad7d-4362-ac71-e20f4a11bc80 req-8fdc5a8a-01d3-4d75-a897-bc221ac05d45 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received event network-vif-plugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:06:47 compute-1 nova_compute[230183]: 2025-11-23 21:06:47.295 230187 DEBUG oslo_concurrency.lockutils [req-7243a388-ad7d-4362-ac71-e20f4a11bc80 req-8fdc5a8a-01d3-4d75-a897-bc221ac05d45 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:06:47 compute-1 nova_compute[230183]: 2025-11-23 21:06:47.295 230187 DEBUG oslo_concurrency.lockutils [req-7243a388-ad7d-4362-ac71-e20f4a11bc80 req-8fdc5a8a-01d3-4d75-a897-bc221ac05d45 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:06:47 compute-1 nova_compute[230183]: 2025-11-23 21:06:47.295 230187 DEBUG oslo_concurrency.lockutils [req-7243a388-ad7d-4362-ac71-e20f4a11bc80 req-8fdc5a8a-01d3-4d75-a897-bc221ac05d45 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:06:47 compute-1 nova_compute[230183]: 2025-11-23 21:06:47.296 230187 DEBUG nova.compute.manager [req-7243a388-ad7d-4362-ac71-e20f4a11bc80 req-8fdc5a8a-01d3-4d75-a897-bc221ac05d45 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] No waiting events found dispatching network-vif-plugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 21:06:47 compute-1 nova_compute[230183]: 2025-11-23 21:06:47.296 230187 WARNING nova.compute.manager [req-7243a388-ad7d-4362-ac71-e20f4a11bc80 req-8fdc5a8a-01d3-4d75-a897-bc221ac05d45 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received unexpected event network-vif-plugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 for instance with vm_state active and task_state None.
Nov 23 21:06:47 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:47.444 142158 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 23 21:06:47 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:47.444 142158 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpuwkaw7zn/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 23 21:06:47 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:47.313 233916 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 23 21:06:47 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:47.318 233916 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 23 21:06:47 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:47.320 233916 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 23 21:06:47 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:47.321 233916 INFO oslo.privsep.daemon [-] privsep daemon running as pid 233916
Nov 23 21:06:47 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:47.447 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[f54965c4-26e2-4720-b1a8-08bd6e8a4333]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:06:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:47 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:47 compute-1 nova_compute[230183]: 2025-11-23 21:06:47.772 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:06:47 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:47.968 233916 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:06:47 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:47.968 233916 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:06:47 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:47.969 233916 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:06:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:48.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:48 compute-1 nova_compute[230183]: 2025-11-23 21:06:48.195 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:06:48 compute-1 NetworkManager[49021]: <info>  [1763932008.1963] manager: (patch-br-int-to-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/25)
Nov 23 21:06:48 compute-1 NetworkManager[49021]: <info>  [1763932008.1971] device (patch-br-int-to-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 21:06:48 compute-1 NetworkManager[49021]: <info>  [1763932008.1986] manager: (patch-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/26)
Nov 23 21:06:48 compute-1 NetworkManager[49021]: <info>  [1763932008.1991] device (patch-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 21:06:48 compute-1 NetworkManager[49021]: <info>  [1763932008.2003] manager: (patch-br-int-to-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Nov 23 21:06:48 compute-1 NetworkManager[49021]: <info>  [1763932008.2011] manager: (patch-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Nov 23 21:06:48 compute-1 NetworkManager[49021]: <info>  [1763932008.2016] device (patch-br-int-to-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 23 21:06:48 compute-1 NetworkManager[49021]: <info>  [1763932008.2021] device (patch-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 23 21:06:48 compute-1 nova_compute[230183]: 2025-11-23 21:06:48.222 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:06:48 compute-1 nova_compute[230183]: 2025-11-23 21:06:48.226 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:06:48 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:48 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:48 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:48 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:48 compute-1 ceph-mon[80135]: pgmap v760: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 41 op/s
Nov 23 21:06:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:48.574 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[7ace8fd5-de8b-442a-93af-e11be78fa481]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:06:48 compute-1 NetworkManager[49021]: <info>  [1763932008.5944] manager: (tap7aadcd86-30): new Veth device (/org/freedesktop/NetworkManager/Devices/29)
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:48.593 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[53201a1d-d6aa-431a-9364-0fb4324ebeca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:06:48 compute-1 systemd-udevd[233930]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:48.626 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[cdeb26e7-dc06-4f2c-9e94-1d44839cd664]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:48.629 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[040e1525-9226-494a-832b-c16a9c18b9d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:06:48 compute-1 NetworkManager[49021]: <info>  [1763932008.6545] device (tap7aadcd86-30): carrier: link connected
Nov 23 21:06:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:06:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:48.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:48.663 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[d10e78fa-08ce-4641-99da-10af232102c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:48.679 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[1813bdf2-3375-4555-ab3d-8323f2b05726]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7aadcd86-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:e4:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 395722, 'reachable_time': 27230, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 233948, 'error': None, 'target': 'ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:48.693 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[3f47b221-ed6a-4cdc-b216-41a9d9c3a897]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8e:e450'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 395722, 'tstamp': 395722}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 233949, 'error': None, 'target': 'ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:48.708 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[1026876d-29cb-4255-b27b-135b2bb29d63]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7aadcd86-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:e4:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 395722, 'reachable_time': 27230, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 233950, 'error': None, 'target': 'ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:48.736 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[0e18c4f3-362a-4cb7-a321-aeff69e73350]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:48.794 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[effda4c6-5c13-4608-a25e-c3af8849e8ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:48.796 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7aadcd86-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:48.797 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:48.797 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7aadcd86-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:06:48 compute-1 kernel: tap7aadcd86-30: entered promiscuous mode
Nov 23 21:06:48 compute-1 NetworkManager[49021]: <info>  [1763932008.8000] manager: (tap7aadcd86-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Nov 23 21:06:48 compute-1 nova_compute[230183]: 2025-11-23 21:06:48.799 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:06:48 compute-1 nova_compute[230183]: 2025-11-23 21:06:48.801 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:48.803 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7aadcd86-30, col_values=(('external_ids', {'iface-id': 'aeecf50b-036b-450d-8620-c40267ec9fc6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:06:48 compute-1 ovn_controller[132845]: 2025-11-23T21:06:48Z|00031|binding|INFO|Releasing lport aeecf50b-036b-450d-8620-c40267ec9fc6 from this chassis (sb_readonly=0)
Nov 23 21:06:48 compute-1 nova_compute[230183]: 2025-11-23 21:06:48.804 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:06:48 compute-1 nova_compute[230183]: 2025-11-23 21:06:48.829 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:06:48 compute-1 nova_compute[230183]: 2025-11-23 21:06:48.832 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:48.835 142158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7aadcd86-30a0-48ed-988a-324cae3af3e6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7aadcd86-30a0-48ed-988a-324cae3af3e6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:48.837 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[39776a6d-49b9-46e2-9097-5dd76e7fb27c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:48.839 142158 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]: global
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]:     log         /dev/log local0 debug
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]:     log-tag     haproxy-metadata-proxy-7aadcd86-30a0-48ed-988a-324cae3af3e6
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]:     user        root
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]:     group       root
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]:     maxconn     1024
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]:     pidfile     /var/lib/neutron/external/pids/7aadcd86-30a0-48ed-988a-324cae3af3e6.pid.haproxy
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]:     daemon
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]: 
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]: defaults
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]:     log global
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]:     mode http
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]:     option httplog
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]:     option dontlognull
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]:     option http-server-close
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]:     option forwardfor
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]:     retries                 3
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]:     timeout http-request    30s
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]:     timeout connect         30s
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]:     timeout client          32s
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]:     timeout server          32s
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]:     timeout http-keep-alive 30s
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]: 
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]: 
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]: listen listener
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]:     bind 169.254.169.254:80
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]:     server metadata /var/lib/neutron/metadata_proxy
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]:     http-request add-header X-OVN-Network-ID 7aadcd86-30a0-48ed-988a-324cae3af3e6
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 23 21:06:48 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:48.840 142158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6', 'env', 'PROCESS_TAG=haproxy-7aadcd86-30a0-48ed-988a-324cae3af3e6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7aadcd86-30a0-48ed-988a-324cae3af3e6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 23 21:06:48 compute-1 nova_compute[230183]: 2025-11-23 21:06:48.935 230187 DEBUG nova.compute.manager [req-72284170-e7fb-47fd-bcb2-8ff1815a4897 req-1db0de17-ee36-48ab-bc99-32a7bcb05de1 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received event network-changed-f23315bc-0f2d-4e45-91a2-0f72a8929b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:06:48 compute-1 nova_compute[230183]: 2025-11-23 21:06:48.935 230187 DEBUG nova.compute.manager [req-72284170-e7fb-47fd-bcb2-8ff1815a4897 req-1db0de17-ee36-48ab-bc99-32a7bcb05de1 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Refreshing instance network info cache due to event network-changed-f23315bc-0f2d-4e45-91a2-0f72a8929b88. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 23 21:06:48 compute-1 nova_compute[230183]: 2025-11-23 21:06:48.935 230187 DEBUG oslo_concurrency.lockutils [req-72284170-e7fb-47fd-bcb2-8ff1815a4897 req-1db0de17-ee36-48ab-bc99-32a7bcb05de1 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-b88f69cf-a706-408d-8dd0-9c891ac278df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:06:48 compute-1 nova_compute[230183]: 2025-11-23 21:06:48.936 230187 DEBUG oslo_concurrency.lockutils [req-72284170-e7fb-47fd-bcb2-8ff1815a4897 req-1db0de17-ee36-48ab-bc99-32a7bcb05de1 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-b88f69cf-a706-408d-8dd0-9c891ac278df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:06:48 compute-1 nova_compute[230183]: 2025-11-23 21:06:48.936 230187 DEBUG nova.network.neutron [req-72284170-e7fb-47fd-bcb2-8ff1815a4897 req-1db0de17-ee36-48ab-bc99-32a7bcb05de1 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Refreshing network info cache for port f23315bc-0f2d-4e45-91a2-0f72a8929b88 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 23 21:06:49 compute-1 podman[233982]: 2025-11-23 21:06:49.232374829 +0000 UTC m=+0.047461497 container create 788cac40ba3f928f4c7a08498ffacecf01fe20fcd3dacb3d0a6c5ef868aea5d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 23 21:06:49 compute-1 systemd[1]: Started libpod-conmon-788cac40ba3f928f4c7a08498ffacecf01fe20fcd3dacb3d0a6c5ef868aea5d6.scope.
Nov 23 21:06:49 compute-1 systemd[1]: Started libcrun container.
Nov 23 21:06:49 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c5263137e4ec5459bb96e030a6c0c4608606faf4bf3953f654301e7654b612/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 21:06:49 compute-1 podman[233982]: 2025-11-23 21:06:49.294172852 +0000 UTC m=+0.109259530 container init 788cac40ba3f928f4c7a08498ffacecf01fe20fcd3dacb3d0a6c5ef868aea5d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 21:06:49 compute-1 podman[233982]: 2025-11-23 21:06:49.299937132 +0000 UTC m=+0.115023800 container start 788cac40ba3f928f4c7a08498ffacecf01fe20fcd3dacb3d0a6c5ef868aea5d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 23 21:06:49 compute-1 podman[233982]: 2025-11-23 21:06:49.206710498 +0000 UTC m=+0.021797186 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 23 21:06:49 compute-1 podman[233995]: 2025-11-23 21:06:49.338290696 +0000 UTC m=+0.066905536 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3)
Nov 23 21:06:49 compute-1 neutron-haproxy-ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6[233999]: [NOTICE]   (234013) : New worker (234022) forked
Nov 23 21:06:49 compute-1 neutron-haproxy-ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6[233999]: [NOTICE]   (234013) : Loading success.
Nov 23 21:06:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:49 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:50.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:50 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:50 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:50 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 e150: 3 total, 3 up, 3 in
Nov 23 21:06:50 compute-1 ceph-mon[80135]: pgmap v761: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 41 op/s
Nov 23 21:06:50 compute-1 ceph-mon[80135]: osdmap e150: 3 total, 3 up, 3 in
Nov 23 21:06:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:50.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:50 compute-1 nova_compute[230183]: 2025-11-23 21:06:50.955 230187 DEBUG nova.network.neutron [req-72284170-e7fb-47fd-bcb2-8ff1815a4897 req-1db0de17-ee36-48ab-bc99-32a7bcb05de1 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Updated VIF entry in instance network info cache for port f23315bc-0f2d-4e45-91a2-0f72a8929b88. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 23 21:06:50 compute-1 nova_compute[230183]: 2025-11-23 21:06:50.955 230187 DEBUG nova.network.neutron [req-72284170-e7fb-47fd-bcb2-8ff1815a4897 req-1db0de17-ee36-48ab-bc99-32a7bcb05de1 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Updating instance_info_cache with network_info: [{"id": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "address": "fa:16:3e:6f:7a:f0", "network": {"id": "7aadcd86-30a0-48ed-988a-324cae3af3e6", "bridge": "br-int", "label": "tempest-network-smoke--57523881", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf23315bc-0f", "ovs_interfaceid": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:06:50 compute-1 nova_compute[230183]: 2025-11-23 21:06:50.976 230187 DEBUG oslo_concurrency.lockutils [req-72284170-e7fb-47fd-bcb2-8ff1815a4897 req-1db0de17-ee36-48ab-bc99-32a7bcb05de1 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-b88f69cf-a706-408d-8dd0-9c891ac278df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:06:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:51.063 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:06:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:51.063 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:06:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:06:51.064 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:06:51 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:51 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:51 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:06:51 compute-1 nova_compute[230183]: 2025-11-23 21:06:51.772 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:06:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:06:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:52.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:06:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:52 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:52 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:52 compute-1 ceph-mon[80135]: pgmap v763: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 121 op/s
Nov 23 21:06:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:52.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:52 compute-1 nova_compute[230183]: 2025-11-23 21:06:52.823 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:06:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:53 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb1300027c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:54.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:54 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:54 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:54 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:54 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:54 compute-1 ceph-mon[80135]: pgmap v764: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 121 op/s
Nov 23 21:06:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:54.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:55 compute-1 nova_compute[230183]: 2025-11-23 21:06:55.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:06:55 compute-1 nova_compute[230183]: 2025-11-23 21:06:55.428 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 23 21:06:55 compute-1 nova_compute[230183]: 2025-11-23 21:06:55.452 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 23 21:06:55 compute-1 nova_compute[230183]: 2025-11-23 21:06:55.453 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:06:55 compute-1 nova_compute[230183]: 2025-11-23 21:06:55.453 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 23 21:06:55 compute-1 nova_compute[230183]: 2025-11-23 21:06:55.466 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:06:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:55 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:56.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:56 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb1300027c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:56 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb1180016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:56 compute-1 nova_compute[230183]: 2025-11-23 21:06:56.482 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:06:56 compute-1 nova_compute[230183]: 2025-11-23 21:06:56.483 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 21:06:56 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:06:56 compute-1 ceph-mon[80135]: pgmap v765: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 102 B/s wr, 80 op/s
Nov 23 21:06:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:56.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:56 compute-1 nova_compute[230183]: 2025-11-23 21:06:56.774 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:06:57 compute-1 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 23 21:06:57 compute-1 nova_compute[230183]: 2025-11-23 21:06:57.423 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:06:57 compute-1 nova_compute[230183]: 2025-11-23 21:06:57.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:06:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:57 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:57 compute-1 nova_compute[230183]: 2025-11-23 21:06:57.880 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:06:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:58.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:58 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:58 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb1300027c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:06:58 compute-1 nova_compute[230183]: 2025-11-23 21:06:58.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:06:58 compute-1 nova_compute[230183]: 2025-11-23 21:06:58.427 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 21:06:58 compute-1 nova_compute[230183]: 2025-11-23 21:06:58.427 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 21:06:58 compute-1 ceph-mon[80135]: pgmap v766: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 102 B/s wr, 80 op/s
Nov 23 21:06:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:06:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:06:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:58.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:06:58 compute-1 nova_compute[230183]: 2025-11-23 21:06:58.913 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "refresh_cache-b88f69cf-a706-408d-8dd0-9c891ac278df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:06:58 compute-1 nova_compute[230183]: 2025-11-23 21:06:58.913 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquired lock "refresh_cache-b88f69cf-a706-408d-8dd0-9c891ac278df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:06:58 compute-1 nova_compute[230183]: 2025-11-23 21:06:58.913 230187 DEBUG nova.network.neutron [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 23 21:06:58 compute-1 nova_compute[230183]: 2025-11-23 21:06:58.913 230187 DEBUG nova.objects.instance [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lazy-loading 'info_cache' on Instance uuid b88f69cf-a706-408d-8dd0-9c891ac278df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:06:59 compute-1 ovn_controller[132845]: 2025-11-23T21:06:59Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6f:7a:f0 10.100.0.10
Nov 23 21:06:59 compute-1 ovn_controller[132845]: 2025-11-23T21:06:59Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6f:7a:f0 10.100.0.10
Nov 23 21:06:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:59 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb1180016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:00.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:00 compute-1 nova_compute[230183]: 2025-11-23 21:07:00.201 230187 DEBUG nova.network.neutron [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Updating instance_info_cache with network_info: [{"id": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "address": "fa:16:3e:6f:7a:f0", "network": {"id": "7aadcd86-30a0-48ed-988a-324cae3af3e6", "bridge": "br-int", "label": "tempest-network-smoke--57523881", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf23315bc-0f", "ovs_interfaceid": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:07:00 compute-1 nova_compute[230183]: 2025-11-23 21:07:00.225 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Releasing lock "refresh_cache-b88f69cf-a706-408d-8dd0-9c891ac278df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:07:00 compute-1 nova_compute[230183]: 2025-11-23 21:07:00.225 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 23 21:07:00 compute-1 nova_compute[230183]: 2025-11-23 21:07:00.226 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:07:00 compute-1 nova_compute[230183]: 2025-11-23 21:07:00.227 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:07:00 compute-1 nova_compute[230183]: 2025-11-23 21:07:00.227 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:07:00 compute-1 nova_compute[230183]: 2025-11-23 21:07:00.227 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:07:00 compute-1 nova_compute[230183]: 2025-11-23 21:07:00.271 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:07:00 compute-1 nova_compute[230183]: 2025-11-23 21:07:00.272 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:07:00 compute-1 nova_compute[230183]: 2025-11-23 21:07:00.272 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:07:00 compute-1 nova_compute[230183]: 2025-11-23 21:07:00.272 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 21:07:00 compute-1 nova_compute[230183]: 2025-11-23 21:07:00.272 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:07:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:00 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003ce0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:00 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:00 compute-1 ceph-mon[80135]: pgmap v767: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 102 B/s wr, 80 op/s
Nov 23 21:07:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:07:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:00.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:07:00 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:07:00 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2909697320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:07:00 compute-1 nova_compute[230183]: 2025-11-23 21:07:00.715 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:07:00 compute-1 nova_compute[230183]: 2025-11-23 21:07:00.781 230187 DEBUG nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 23 21:07:00 compute-1 nova_compute[230183]: 2025-11-23 21:07:00.782 230187 DEBUG nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 23 21:07:00 compute-1 nova_compute[230183]: 2025-11-23 21:07:00.928 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 21:07:00 compute-1 nova_compute[230183]: 2025-11-23 21:07:00.929 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4835MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 21:07:00 compute-1 nova_compute[230183]: 2025-11-23 21:07:00.929 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:07:00 compute-1 nova_compute[230183]: 2025-11-23 21:07:00.929 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:07:01 compute-1 nova_compute[230183]: 2025-11-23 21:07:01.050 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Instance b88f69cf-a706-408d-8dd0-9c891ac278df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 23 21:07:01 compute-1 nova_compute[230183]: 2025-11-23 21:07:01.051 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 21:07:01 compute-1 nova_compute[230183]: 2025-11-23 21:07:01.051 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 21:07:01 compute-1 nova_compute[230183]: 2025-11-23 21:07:01.121 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Refreshing inventories for resource provider bb217351-d4c8-44a4-9137-08393a1f72bc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 23 21:07:01 compute-1 nova_compute[230183]: 2025-11-23 21:07:01.206 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Updating ProviderTree inventory for provider bb217351-d4c8-44a4-9137-08393a1f72bc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 23 21:07:01 compute-1 nova_compute[230183]: 2025-11-23 21:07:01.207 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Updating inventory in ProviderTree for provider bb217351-d4c8-44a4-9137-08393a1f72bc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 23 21:07:01 compute-1 nova_compute[230183]: 2025-11-23 21:07:01.231 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Refreshing aggregate associations for resource provider bb217351-d4c8-44a4-9137-08393a1f72bc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 23 21:07:01 compute-1 nova_compute[230183]: 2025-11-23 21:07:01.258 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Refreshing trait associations for resource provider bb217351-d4c8-44a4-9137-08393a1f72bc, traits: COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,HW_CPU_X86_AVX,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SHA,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SVM,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 23 21:07:01 compute-1 nova_compute[230183]: 2025-11-23 21:07:01.298 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:07:01 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:01 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:01 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:07:01 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2909697320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:07:01 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/705310154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:07:01 compute-1 ceph-mon[80135]: pgmap v768: 337 pgs: 337 active+clean; 118 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 366 KiB/s rd, 2.4 MiB/s wr, 71 op/s
Nov 23 21:07:01 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:07:01 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3596254833' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:07:01 compute-1 nova_compute[230183]: 2025-11-23 21:07:01.739 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:07:01 compute-1 nova_compute[230183]: 2025-11-23 21:07:01.746 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Updating inventory in ProviderTree for provider bb217351-d4c8-44a4-9137-08393a1f72bc with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 23 21:07:01 compute-1 nova_compute[230183]: 2025-11-23 21:07:01.777 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:07:01 compute-1 nova_compute[230183]: 2025-11-23 21:07:01.786 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Updated inventory for provider bb217351-d4c8-44a4-9137-08393a1f72bc with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 23 21:07:01 compute-1 nova_compute[230183]: 2025-11-23 21:07:01.786 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Updating resource provider bb217351-d4c8-44a4-9137-08393a1f72bc generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 23 21:07:01 compute-1 nova_compute[230183]: 2025-11-23 21:07:01.786 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Updating inventory in ProviderTree for provider bb217351-d4c8-44a4-9137-08393a1f72bc with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 23 21:07:01 compute-1 nova_compute[230183]: 2025-11-23 21:07:01.811 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 21:07:01 compute-1 nova_compute[230183]: 2025-11-23 21:07:01.811 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:07:02 compute-1 nova_compute[230183]: 2025-11-23 21:07:02.012 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:07:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:02.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:02 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb1180016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:02 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:02 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3596254833' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:07:02 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3645982558' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:07:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:07:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:02.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:07:02 compute-1 nova_compute[230183]: 2025-11-23 21:07:02.918 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:07:03 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:03 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:03 compute-1 sudo[234090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:07:03 compute-1 sudo[234090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:07:03 compute-1 sudo[234090]: pam_unix(sudo:session): session closed for user root
Nov 23 21:07:03 compute-1 ceph-mon[80135]: pgmap v769: 337 pgs: 337 active+clean; 118 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 23 21:07:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:07:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:07:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:04.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:07:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:04 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb114000d20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:04 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:04 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:04.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:04 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/984064645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:07:05 compute-1 nova_compute[230183]: 2025-11-23 21:07:05.091 230187 INFO nova.compute.manager [None req-4b9dad6f-8c27-41f2-9c7e-8ab6bb5d4d3b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Get console output
Nov 23 21:07:05 compute-1 nova_compute[230183]: 2025-11-23 21:07:05.096 230187 INFO oslo.privsep.daemon [None req-4b9dad6f-8c27-41f2-9c7e-8ab6bb5d4d3b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpmmlmp56h/privsep.sock']
Nov 23 21:07:05 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:05 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003d20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:05 compute-1 ceph-mon[80135]: pgmap v770: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 23 21:07:05 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3414325586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:07:05 compute-1 nova_compute[230183]: 2025-11-23 21:07:05.758 230187 INFO oslo.privsep.daemon [None req-4b9dad6f-8c27-41f2-9c7e-8ab6bb5d4d3b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Spawned new privsep daemon via rootwrap
Nov 23 21:07:05 compute-1 nova_compute[230183]: 2025-11-23 21:07:05.639 234120 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 23 21:07:05 compute-1 nova_compute[230183]: 2025-11-23 21:07:05.642 234120 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 23 21:07:05 compute-1 nova_compute[230183]: 2025-11-23 21:07:05.644 234120 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 23 21:07:05 compute-1 nova_compute[230183]: 2025-11-23 21:07:05.644 234120 INFO oslo.privsep.daemon [-] privsep daemon running as pid 234120
Nov 23 21:07:05 compute-1 nova_compute[230183]: 2025-11-23 21:07:05.853 234120 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 23 21:07:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:06.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:06 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:06 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb114001840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:06 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:07:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:06.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:06 compute-1 nova_compute[230183]: 2025-11-23 21:07:06.779 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:07:07 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:07 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:07 compute-1 nova_compute[230183]: 2025-11-23 21:07:07.959 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:07:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:08.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:08 compute-1 ceph-mon[80135]: pgmap v771: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 23 21:07:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/1035460735' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 21:07:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/1035460735' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 21:07:08 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:08 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003d40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:08 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:08 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:08.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:09 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:09 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb114001840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:10.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:10 compute-1 ceph-mon[80135]: pgmap v772: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 23 21:07:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:10 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:10 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:10 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:07:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:10.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:07:11 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:07:11.218 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 21:07:11 compute-1 nova_compute[230183]: 2025-11-23 21:07:11.219 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:07:11 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:07:11.220 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 23 21:07:11 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:11 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:11 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:07:11 compute-1 nova_compute[230183]: 2025-11-23 21:07:11.781 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:07:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:12.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:12 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:12 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb114001840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:12 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:12 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:12 compute-1 ceph-mon[80135]: pgmap v773: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 23 21:07:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:12.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:12 compute-1 nova_compute[230183]: 2025-11-23 21:07:12.962 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:07:13 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:13 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:07:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:14.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:07:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:14 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:14 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:14 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb114002cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:14 compute-1 ceph-mon[80135]: pgmap v774: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 15 KiB/s wr, 1 op/s
Nov 23 21:07:14 compute-1 podman[234127]: 2025-11-23 21:07:14.65636057 +0000 UTC m=+0.061196937 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent)
Nov 23 21:07:14 compute-1 podman[234126]: 2025-11-23 21:07:14.693772198 +0000 UTC m=+0.098378898 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true)
Nov 23 21:07:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:14.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:15 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:15 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:16.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:16 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:16 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:16 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb114002cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:16 compute-1 ceph-mon[80135]: pgmap v775: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 18 KiB/s wr, 1 op/s
Nov 23 21:07:16 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:07:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:16.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:16 compute-1 nova_compute[230183]: 2025-11-23 21:07:16.782 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:07:17 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:17 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:17 compute-1 nova_compute[230183]: 2025-11-23 21:07:17.965 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:07:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:18.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:18 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:07:18.222 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d8ff4ac4-2bee-48db-b79e-2466bc4db046, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:07:18 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:18 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:18 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:18 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:18 compute-1 ceph-mon[80135]: pgmap v776: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 5.9 KiB/s rd, 15 KiB/s wr, 1 op/s
Nov 23 21:07:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:07:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:18.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:19 compute-1 sudo[234174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 21:07:19 compute-1 sudo[234174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:07:19 compute-1 sudo[234174]: pam_unix(sudo:session): session closed for user root
Nov 23 21:07:19 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:19 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb114002cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:19 compute-1 sudo[234205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 21:07:19 compute-1 sudo[234205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:07:19 compute-1 podman[234198]: 2025-11-23 21:07:19.504889755 +0000 UTC m=+0.069495468 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 23 21:07:20 compute-1 sudo[234205]: pam_unix(sudo:session): session closed for user root
Nov 23 21:07:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:07:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:20.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:07:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:20 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:20 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:20 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:20 compute-1 ceph-mon[80135]: pgmap v777: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 5.9 KiB/s rd, 15 KiB/s wr, 1 op/s
Nov 23 21:07:20 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1983814822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:07:20 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:07:20 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 21:07:20 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:07:20 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:07:20 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 21:07:20 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 21:07:20 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:07:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:20.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:21 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:21 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:21 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:07:21 compute-1 nova_compute[230183]: 2025-11-23 21:07:21.785 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:07:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:22.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:22 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb114003dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:22 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:22 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:22 compute-1 ceph-mon[80135]: pgmap v778: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 15 KiB/s wr, 1 op/s
Nov 23 21:07:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:07:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:22.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:07:22 compute-1 nova_compute[230183]: 2025-11-23 21:07:22.968 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:07:23 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:23 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:23 compute-1 sudo[234277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:07:23 compute-1 sudo[234277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:07:23 compute-1 sudo[234277]: pam_unix(sudo:session): session closed for user root
Nov 23 21:07:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:24.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:24 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:24 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb114003dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:24 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:24 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:24 compute-1 ceph-mon[80135]: pgmap v779: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 3.3 KiB/s wr, 0 op/s
Nov 23 21:07:24 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2482210232' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:07:24 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:07:24 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:07:24 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2495490603' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:07:24 compute-1 sudo[234302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 21:07:24 compute-1 sudo[234302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:07:24 compute-1 sudo[234302]: pam_unix(sudo:session): session closed for user root
Nov 23 21:07:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:24.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:25 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:25 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:07:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:26.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:07:26 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:26 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:26 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:26 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:26 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:07:26 compute-1 ceph-mon[80135]: pgmap v780: 337 pgs: 337 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 23 21:07:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:07:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:26.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:07:26 compute-1 nova_compute[230183]: 2025-11-23 21:07:26.788 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:07:27 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:27 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:27 compute-1 nova_compute[230183]: 2025-11-23 21:07:27.988 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:07:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:07:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:28.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:07:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:28 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:28 compute-1 ceph-mon[80135]: pgmap v781: 337 pgs: 337 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 23 21:07:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:28.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:29 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:29 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:30.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:30 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:30 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:30 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:30 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:30 compute-1 ceph-mon[80135]: pgmap v782: 337 pgs: 337 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 23 21:07:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:07:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:30.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:07:31 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:31 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:31 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:07:31 compute-1 nova_compute[230183]: 2025-11-23 21:07:31.790 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:07:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:32.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:32 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:32 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:32 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:32 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb130000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:32 compute-1 ceph-mon[80135]: pgmap v783: 337 pgs: 337 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 98 op/s
Nov 23 21:07:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:32.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:33 compute-1 nova_compute[230183]: 2025-11-23 21:07:33.035 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:07:33 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:33 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:07:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:34.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:34 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:34 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120002a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:34 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:34 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:34 compute-1 nova_compute[230183]: 2025-11-23 21:07:34.605 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:07:34 compute-1 ceph-mon[80135]: pgmap v784: 337 pgs: 337 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Nov 23 21:07:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:34.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:35 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:35 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb130000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:36.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:36 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:36 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:36 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:36 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120002a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:36 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:07:36 compute-1 ceph-mon[80135]: pgmap v785: 337 pgs: 337 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Nov 23 21:07:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:36.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:36 compute-1 nova_compute[230183]: 2025-11-23 21:07:36.792 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:07:37 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:37 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:38 compute-1 nova_compute[230183]: 2025-11-23 21:07:38.080 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:07:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:07:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:38.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:07:38 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:38 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:38 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:38 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:38 compute-1 ceph-mon[80135]: pgmap v786: 337 pgs: 337 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 23 21:07:38 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0.
Nov 23 21:07:38 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:07:38.659924) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 21:07:38 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49
Nov 23 21:07:38 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932058659962, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 897, "num_deletes": 251, "total_data_size": 1859380, "memory_usage": 1888000, "flush_reason": "Manual Compaction"}
Nov 23 21:07:38 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started
Nov 23 21:07:38 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932058672151, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 1227461, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25232, "largest_seqno": 26124, "table_properties": {"data_size": 1223306, "index_size": 1871, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9656, "raw_average_key_size": 19, "raw_value_size": 1214752, "raw_average_value_size": 2504, "num_data_blocks": 83, "num_entries": 485, "num_filter_entries": 485, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763931996, "oldest_key_time": 1763931996, "file_creation_time": 1763932058, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Nov 23 21:07:38 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 12315 microseconds, and 3973 cpu microseconds.
Nov 23 21:07:38 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 21:07:38 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:07:38.672222) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 1227461 bytes OK
Nov 23 21:07:38 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:07:38.672266) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started
Nov 23 21:07:38 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:07:38.674320) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done
Nov 23 21:07:38 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:07:38.674336) EVENT_LOG_v1 {"time_micros": 1763932058674332, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 21:07:38 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:07:38.674354) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 21:07:38 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1854806, prev total WAL file size 1854806, number of live WAL files 2.
Nov 23 21:07:38 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:07:38 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:07:38.675047) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Nov 23 21:07:38 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 21:07:38 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(1198KB)], [48(13MB)]
Nov 23 21:07:38 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932058675113, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 15195643, "oldest_snapshot_seqno": -1}
Nov 23 21:07:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:38.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:38 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 5389 keys, 13042783 bytes, temperature: kUnknown
Nov 23 21:07:38 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932058767380, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 13042783, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13007149, "index_size": 21060, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13509, "raw_key_size": 138501, "raw_average_key_size": 25, "raw_value_size": 12909950, "raw_average_value_size": 2395, "num_data_blocks": 855, "num_entries": 5389, "num_filter_entries": 5389, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763932058, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}}
Nov 23 21:07:38 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 21:07:38 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:07:38.767597) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 13042783 bytes
Nov 23 21:07:38 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:07:38.776961) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 164.6 rd, 141.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 13.3 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(23.0) write-amplify(10.6) OK, records in: 5909, records dropped: 520 output_compression: NoCompression
Nov 23 21:07:38 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:07:38.776995) EVENT_LOG_v1 {"time_micros": 1763932058776979, "job": 28, "event": "compaction_finished", "compaction_time_micros": 92336, "compaction_time_cpu_micros": 48916, "output_level": 6, "num_output_files": 1, "total_output_size": 13042783, "num_input_records": 5909, "num_output_records": 5389, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 21:07:38 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:07:38 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932058777439, "job": 28, "event": "table_file_deletion", "file_number": 50}
Nov 23 21:07:38 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:07:38 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932058779487, "job": 28, "event": "table_file_deletion", "file_number": 48}
Nov 23 21:07:38 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:07:38.674926) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:07:38 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:07:38.779612) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:07:38 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:07:38.779619) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:07:38 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:07:38.779622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:07:38 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:07:38.779624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:07:38 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:07:38.779627) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:07:39 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:39 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120002a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:39 compute-1 ceph-mon[80135]: pgmap v787: 337 pgs: 337 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 23 21:07:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:07:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:40.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:07:40 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:40 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb130002100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:40 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:40 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:40.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:41 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:41 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:41 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:07:41 compute-1 nova_compute[230183]: 2025-11-23 21:07:41.794 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:07:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:07:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:42.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:07:42 compute-1 ceph-mon[80135]: pgmap v788: 337 pgs: 337 active+clean; 177 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 84 op/s
Nov 23 21:07:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:42 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120003b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:42 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:42 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:07:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:42.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:07:43 compute-1 nova_compute[230183]: 2025-11-23 21:07:43.082 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:07:43 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:43 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb130002a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:43 compute-1 sudo[234338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:07:43 compute-1 sudo[234338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:07:43 compute-1 sudo[234338]: pam_unix(sudo:session): session closed for user root
Nov 23 21:07:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:07:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:44.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:07:44 compute-1 ceph-mon[80135]: pgmap v789: 337 pgs: 337 active+clean; 177 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 335 KiB/s rd, 1020 KiB/s wr, 20 op/s
Nov 23 21:07:44 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:44 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:44 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:44 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120003b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:44.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:45 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:45 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb130002a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:45 compute-1 podman[234365]: 2025-11-23 21:07:45.654426975 +0000 UTC m=+0.059459749 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 23 21:07:45 compute-1 podman[234364]: 2025-11-23 21:07:45.69462676 +0000 UTC m=+0.095588301 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller)
Nov 23 21:07:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:07:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:46.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:07:46 compute-1 ceph-mon[80135]: pgmap v790: 337 pgs: 337 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 653 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Nov 23 21:07:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:46 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003f70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:46 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:46 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:46 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:07:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:46.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:46 compute-1 nova_compute[230183]: 2025-11-23 21:07:46.797 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:07:47 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:47 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120003b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:48.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:48 compute-1 nova_compute[230183]: 2025-11-23 21:07:48.146 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:07:48 compute-1 ceph-mon[80135]: pgmap v791: 337 pgs: 337 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 23 21:07:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:07:48 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:48 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120003b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:48 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:48 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb130002a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:48.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:49 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:49 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003f90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:49 compute-1 podman[234411]: 2025-11-23 21:07:49.664773542 +0000 UTC m=+0.076437051 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 23 21:07:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:07:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:50.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:07:50 compute-1 ceph-mon[80135]: pgmap v792: 337 pgs: 337 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 23 21:07:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:50 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120003b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:50 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:50 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120003b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:50.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:07:51.064 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:07:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:07:51.065 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:07:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:07:51.065 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:07:51 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:51 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124004040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:51 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:07:51 compute-1 nova_compute[230183]: 2025-11-23 21:07:51.800 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:07:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:52.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:52 compute-1 ceph-mon[80135]: pgmap v793: 337 pgs: 337 active+clean; 163 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.2 MiB/s wr, 73 op/s
Nov 23 21:07:52 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2132313810' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:07:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:52 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124004040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:52 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:52 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb130003b20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:52.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:53 compute-1 nova_compute[230183]: 2025-11-23 21:07:53.149 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:07:53 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1725535026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:07:53 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:53 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120003b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:54.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:54 compute-1 ceph-mon[80135]: pgmap v794: 337 pgs: 337 active+clean; 163 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 321 KiB/s rd, 1.2 MiB/s wr, 63 op/s
Nov 23 21:07:54 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:54 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003fd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:54 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:54 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:54.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:54 compute-1 ovn_controller[132845]: 2025-11-23T21:07:54Z|00032|binding|INFO|Releasing lport aeecf50b-036b-450d-8620-c40267ec9fc6 from this chassis (sb_readonly=0)
Nov 23 21:07:54 compute-1 nova_compute[230183]: 2025-11-23 21:07:54.913 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:07:55 compute-1 nova_compute[230183]: 2025-11-23 21:07:55.436 230187 DEBUG nova.compute.manager [req-d3a780de-ddd5-46c8-9ead-3ee9ea347c12 req-b57ce57d-daef-4102-89c7-6bac34bd8d09 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received event network-changed-f23315bc-0f2d-4e45-91a2-0f72a8929b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:07:55 compute-1 nova_compute[230183]: 2025-11-23 21:07:55.437 230187 DEBUG nova.compute.manager [req-d3a780de-ddd5-46c8-9ead-3ee9ea347c12 req-b57ce57d-daef-4102-89c7-6bac34bd8d09 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Refreshing instance network info cache due to event network-changed-f23315bc-0f2d-4e45-91a2-0f72a8929b88. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 23 21:07:55 compute-1 nova_compute[230183]: 2025-11-23 21:07:55.437 230187 DEBUG oslo_concurrency.lockutils [req-d3a780de-ddd5-46c8-9ead-3ee9ea347c12 req-b57ce57d-daef-4102-89c7-6bac34bd8d09 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-b88f69cf-a706-408d-8dd0-9c891ac278df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:07:55 compute-1 nova_compute[230183]: 2025-11-23 21:07:55.437 230187 DEBUG oslo_concurrency.lockutils [req-d3a780de-ddd5-46c8-9ead-3ee9ea347c12 req-b57ce57d-daef-4102-89c7-6bac34bd8d09 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-b88f69cf-a706-408d-8dd0-9c891ac278df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:07:55 compute-1 nova_compute[230183]: 2025-11-23 21:07:55.437 230187 DEBUG nova.network.neutron [req-d3a780de-ddd5-46c8-9ead-3ee9ea347c12 req-b57ce57d-daef-4102-89c7-6bac34bd8d09 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Refreshing network info cache for port f23315bc-0f2d-4e45-91a2-0f72a8929b88 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 23 21:07:55 compute-1 nova_compute[230183]: 2025-11-23 21:07:55.520 230187 DEBUG oslo_concurrency.lockutils [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "b88f69cf-a706-408d-8dd0-9c891ac278df" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:07:55 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:55 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb130003b20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:55 compute-1 nova_compute[230183]: 2025-11-23 21:07:55.520 230187 DEBUG oslo_concurrency.lockutils [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:07:55 compute-1 nova_compute[230183]: 2025-11-23 21:07:55.521 230187 DEBUG oslo_concurrency.lockutils [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:07:55 compute-1 nova_compute[230183]: 2025-11-23 21:07:55.521 230187 DEBUG oslo_concurrency.lockutils [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:07:55 compute-1 nova_compute[230183]: 2025-11-23 21:07:55.522 230187 DEBUG oslo_concurrency.lockutils [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:07:55 compute-1 nova_compute[230183]: 2025-11-23 21:07:55.523 230187 INFO nova.compute.manager [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Terminating instance
Nov 23 21:07:55 compute-1 nova_compute[230183]: 2025-11-23 21:07:55.524 230187 DEBUG nova.compute.manager [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 23 21:07:55 compute-1 kernel: tapf23315bc-0f (unregistering): left promiscuous mode
Nov 23 21:07:55 compute-1 NetworkManager[49021]: <info>  [1763932075.5846] device (tapf23315bc-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 23 21:07:55 compute-1 nova_compute[230183]: 2025-11-23 21:07:55.593 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:07:55 compute-1 ovn_controller[132845]: 2025-11-23T21:07:55Z|00033|binding|INFO|Releasing lport f23315bc-0f2d-4e45-91a2-0f72a8929b88 from this chassis (sb_readonly=0)
Nov 23 21:07:55 compute-1 ovn_controller[132845]: 2025-11-23T21:07:55Z|00034|binding|INFO|Setting lport f23315bc-0f2d-4e45-91a2-0f72a8929b88 down in Southbound
Nov 23 21:07:55 compute-1 ovn_controller[132845]: 2025-11-23T21:07:55Z|00035|binding|INFO|Removing iface tapf23315bc-0f ovn-installed in OVS
Nov 23 21:07:55 compute-1 nova_compute[230183]: 2025-11-23 21:07:55.597 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:07:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.604 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:7a:f0 10.100.0.10'], port_security=['fa:16:3e:6f:7a:f0 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b88f69cf-a706-408d-8dd0-9c891ac278df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7aadcd86-30a0-48ed-988a-324cae3af3e6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd2fd5313-3792-44d3-ba44-78e423066c2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33862b60-f5fc-47c1-8327-a9c7a8a97ff8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=f23315bc-0f2d-4e45-91a2-0f72a8929b88) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 21:07:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.605 142158 INFO neutron.agent.ovn.metadata.agent [-] Port f23315bc-0f2d-4e45-91a2-0f72a8929b88 in datapath 7aadcd86-30a0-48ed-988a-324cae3af3e6 unbound from our chassis
Nov 23 21:07:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.607 142158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7aadcd86-30a0-48ed-988a-324cae3af3e6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 23 21:07:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.608 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[fc8759fe-b200-4ce7-bdf0-ea5ae9567bf3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:07:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.609 142158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6 namespace which is not needed anymore
Nov 23 21:07:55 compute-1 nova_compute[230183]: 2025-11-23 21:07:55.613 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:07:55 compute-1 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Nov 23 21:07:55 compute-1 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 16.590s CPU time.
Nov 23 21:07:55 compute-1 systemd-machined[193469]: Machine qemu-1-instance-00000001 terminated.
Nov 23 21:07:55 compute-1 neutron-haproxy-ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6[233999]: [NOTICE]   (234013) : haproxy version is 2.8.14-c23fe91
Nov 23 21:07:55 compute-1 neutron-haproxy-ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6[233999]: [NOTICE]   (234013) : path to executable is /usr/sbin/haproxy
Nov 23 21:07:55 compute-1 neutron-haproxy-ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6[233999]: [WARNING]  (234013) : Exiting Master process...
Nov 23 21:07:55 compute-1 neutron-haproxy-ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6[233999]: [ALERT]    (234013) : Current worker (234022) exited with code 143 (Terminated)
Nov 23 21:07:55 compute-1 neutron-haproxy-ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6[233999]: [WARNING]  (234013) : All workers exited. Exiting... (0)
Nov 23 21:07:55 compute-1 systemd[1]: libpod-788cac40ba3f928f4c7a08498ffacecf01fe20fcd3dacb3d0a6c5ef868aea5d6.scope: Deactivated successfully.
Nov 23 21:07:55 compute-1 kernel: tapf23315bc-0f: entered promiscuous mode
Nov 23 21:07:55 compute-1 kernel: tapf23315bc-0f (unregistering): left promiscuous mode
Nov 23 21:07:55 compute-1 podman[234460]: 2025-11-23 21:07:55.741544572 +0000 UTC m=+0.041453340 container died 788cac40ba3f928f4c7a08498ffacecf01fe20fcd3dacb3d0a6c5ef868aea5d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 21:07:55 compute-1 NetworkManager[49021]: <info>  [1763932075.7433] manager: (tapf23315bc-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/31)
Nov 23 21:07:55 compute-1 ovn_controller[132845]: 2025-11-23T21:07:55Z|00036|binding|INFO|Claiming lport f23315bc-0f2d-4e45-91a2-0f72a8929b88 for this chassis.
Nov 23 21:07:55 compute-1 ovn_controller[132845]: 2025-11-23T21:07:55Z|00037|binding|INFO|f23315bc-0f2d-4e45-91a2-0f72a8929b88: Claiming fa:16:3e:6f:7a:f0 10.100.0.10
Nov 23 21:07:55 compute-1 nova_compute[230183]: 2025-11-23 21:07:55.751 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:07:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.753 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:7a:f0 10.100.0.10'], port_security=['fa:16:3e:6f:7a:f0 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b88f69cf-a706-408d-8dd0-9c891ac278df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7aadcd86-30a0-48ed-988a-324cae3af3e6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd2fd5313-3792-44d3-ba44-78e423066c2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33862b60-f5fc-47c1-8327-a9c7a8a97ff8, chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=f23315bc-0f2d-4e45-91a2-0f72a8929b88) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 21:07:55 compute-1 nova_compute[230183]: 2025-11-23 21:07:55.766 230187 INFO nova.virt.libvirt.driver [-] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Instance destroyed successfully.
Nov 23 21:07:55 compute-1 nova_compute[230183]: 2025-11-23 21:07:55.766 230187 DEBUG nova.objects.instance [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'resources' on Instance uuid b88f69cf-a706-408d-8dd0-9c891ac278df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:07:55 compute-1 nova_compute[230183]: 2025-11-23 21:07:55.770 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:07:55 compute-1 ovn_controller[132845]: 2025-11-23T21:07:55Z|00038|binding|INFO|Setting lport f23315bc-0f2d-4e45-91a2-0f72a8929b88 ovn-installed in OVS
Nov 23 21:07:55 compute-1 ovn_controller[132845]: 2025-11-23T21:07:55Z|00039|binding|INFO|Setting lport f23315bc-0f2d-4e45-91a2-0f72a8929b88 up in Southbound
Nov 23 21:07:55 compute-1 ovn_controller[132845]: 2025-11-23T21:07:55Z|00040|binding|INFO|Releasing lport f23315bc-0f2d-4e45-91a2-0f72a8929b88 from this chassis (sb_readonly=1)
Nov 23 21:07:55 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-788cac40ba3f928f4c7a08498ffacecf01fe20fcd3dacb3d0a6c5ef868aea5d6-userdata-shm.mount: Deactivated successfully.
Nov 23 21:07:55 compute-1 ovn_controller[132845]: 2025-11-23T21:07:55Z|00041|if_status|INFO|Not setting lport f23315bc-0f2d-4e45-91a2-0f72a8929b88 down as sb is readonly
Nov 23 21:07:55 compute-1 ovn_controller[132845]: 2025-11-23T21:07:55Z|00042|binding|INFO|Removing iface tapf23315bc-0f ovn-installed in OVS
Nov 23 21:07:55 compute-1 systemd[1]: var-lib-containers-storage-overlay-b4c5263137e4ec5459bb96e030a6c0c4608606faf4bf3953f654301e7654b612-merged.mount: Deactivated successfully.
Nov 23 21:07:55 compute-1 ovn_controller[132845]: 2025-11-23T21:07:55Z|00043|binding|INFO|Releasing lport f23315bc-0f2d-4e45-91a2-0f72a8929b88 from this chassis (sb_readonly=0)
Nov 23 21:07:55 compute-1 ovn_controller[132845]: 2025-11-23T21:07:55Z|00044|binding|INFO|Setting lport f23315bc-0f2d-4e45-91a2-0f72a8929b88 down in Southbound
Nov 23 21:07:55 compute-1 nova_compute[230183]: 2025-11-23 21:07:55.781 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:07:55 compute-1 nova_compute[230183]: 2025-11-23 21:07:55.785 230187 DEBUG nova.virt.libvirt.vif [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:06:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2101370279',display_name='tempest-TestNetworkBasicOps-server-2101370279',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2101370279',id=1,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPkopXsVozaBPjiL+h6NejRz4cW0k9/uA5JpHUVBsNmasGNuNCs7C0SGQ6LkonC2lifS0mLNUtTMnfgtFGQBRj5+CsXOBseSmB+++OQ3W87ZPdTUTnkg9uBrGbnjrus9+A==',key_name='tempest-TestNetworkBasicOps-1595128200',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:06:45Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-97azc21p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:06:45Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=b88f69cf-a706-408d-8dd0-9c891ac278df,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "address": "fa:16:3e:6f:7a:f0", "network": {"id": "7aadcd86-30a0-48ed-988a-324cae3af3e6", "bridge": "br-int", "label": "tempest-network-smoke--57523881", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf23315bc-0f", "ovs_interfaceid": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 23 21:07:55 compute-1 nova_compute[230183]: 2025-11-23 21:07:55.785 230187 DEBUG nova.network.os_vif_util [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "address": "fa:16:3e:6f:7a:f0", "network": {"id": "7aadcd86-30a0-48ed-988a-324cae3af3e6", "bridge": "br-int", "label": "tempest-network-smoke--57523881", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf23315bc-0f", "ovs_interfaceid": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 23 21:07:55 compute-1 nova_compute[230183]: 2025-11-23 21:07:55.786 230187 DEBUG nova.network.os_vif_util [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6f:7a:f0,bridge_name='br-int',has_traffic_filtering=True,id=f23315bc-0f2d-4e45-91a2-0f72a8929b88,network=Network(7aadcd86-30a0-48ed-988a-324cae3af3e6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf23315bc-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 23 21:07:55 compute-1 nova_compute[230183]: 2025-11-23 21:07:55.787 230187 DEBUG os_vif [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6f:7a:f0,bridge_name='br-int',has_traffic_filtering=True,id=f23315bc-0f2d-4e45-91a2-0f72a8929b88,network=Network(7aadcd86-30a0-48ed-988a-324cae3af3e6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf23315bc-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 23 21:07:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.788 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:7a:f0 10.100.0.10'], port_security=['fa:16:3e:6f:7a:f0 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b88f69cf-a706-408d-8dd0-9c891ac278df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7aadcd86-30a0-48ed-988a-324cae3af3e6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd2fd5313-3792-44d3-ba44-78e423066c2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33862b60-f5fc-47c1-8327-a9c7a8a97ff8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=f23315bc-0f2d-4e45-91a2-0f72a8929b88) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 21:07:55 compute-1 nova_compute[230183]: 2025-11-23 21:07:55.789 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:07:55 compute-1 nova_compute[230183]: 2025-11-23 21:07:55.790 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf23315bc-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:07:55 compute-1 nova_compute[230183]: 2025-11-23 21:07:55.791 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:07:55 compute-1 nova_compute[230183]: 2025-11-23 21:07:55.792 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:07:55 compute-1 podman[234460]: 2025-11-23 21:07:55.792017541 +0000 UTC m=+0.091926309 container cleanup 788cac40ba3f928f4c7a08498ffacecf01fe20fcd3dacb3d0a6c5ef868aea5d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 21:07:55 compute-1 nova_compute[230183]: 2025-11-23 21:07:55.794 230187 INFO os_vif [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6f:7a:f0,bridge_name='br-int',has_traffic_filtering=True,id=f23315bc-0f2d-4e45-91a2-0f72a8929b88,network=Network(7aadcd86-30a0-48ed-988a-324cae3af3e6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf23315bc-0f')
Nov 23 21:07:55 compute-1 systemd[1]: libpod-conmon-788cac40ba3f928f4c7a08498ffacecf01fe20fcd3dacb3d0a6c5ef868aea5d6.scope: Deactivated successfully.
Nov 23 21:07:55 compute-1 podman[234500]: 2025-11-23 21:07:55.873815349 +0000 UTC m=+0.057067634 container remove 788cac40ba3f928f4c7a08498ffacecf01fe20fcd3dacb3d0a6c5ef868aea5d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 21:07:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.883 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[82b4039e-b4b2-406c-ab35-80d55b062718]: (4, ('Sun Nov 23 09:07:55 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6 (788cac40ba3f928f4c7a08498ffacecf01fe20fcd3dacb3d0a6c5ef868aea5d6)\n788cac40ba3f928f4c7a08498ffacecf01fe20fcd3dacb3d0a6c5ef868aea5d6\nSun Nov 23 09:07:55 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6 (788cac40ba3f928f4c7a08498ffacecf01fe20fcd3dacb3d0a6c5ef868aea5d6)\n788cac40ba3f928f4c7a08498ffacecf01fe20fcd3dacb3d0a6c5ef868aea5d6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:07:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.885 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[af728f66-5943-4a37-a338-01e47a8d02da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:07:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.886 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7aadcd86-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:07:55 compute-1 kernel: tap7aadcd86-30: left promiscuous mode
Nov 23 21:07:55 compute-1 nova_compute[230183]: 2025-11-23 21:07:55.892 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:07:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.894 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[dd7cbdd0-d31d-4c11-8788-1bf7f84b10c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:07:55 compute-1 nova_compute[230183]: 2025-11-23 21:07:55.902 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:07:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.908 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[a4694ce6-7d56-4a8d-9fe1-7dd272908c1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:07:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.910 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[5e12edf8-4c66-4cff-9263-9eb665d3f5e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:07:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.924 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[8ccb1537-af54-4aa0-84b5-c5c37209bb22]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 395714, 'reachable_time': 29894, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 234526, 'error': None, 'target': 'ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:07:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.936 142272 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 23 21:07:55 compute-1 systemd[1]: run-netns-ovnmeta\x2d7aadcd86\x2d30a0\x2d48ed\x2d988a\x2d324cae3af3e6.mount: Deactivated successfully.
Nov 23 21:07:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.936 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[6e42b2c8-bdc4-4805-9165-07af1b2666ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:07:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.938 142158 INFO neutron.agent.ovn.metadata.agent [-] Port f23315bc-0f2d-4e45-91a2-0f72a8929b88 in datapath 7aadcd86-30a0-48ed-988a-324cae3af3e6 unbound from our chassis
Nov 23 21:07:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.939 142158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7aadcd86-30a0-48ed-988a-324cae3af3e6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 23 21:07:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.940 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[e38ab74d-0e01-4715-b901-4759666a7dba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:07:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.940 142158 INFO neutron.agent.ovn.metadata.agent [-] Port f23315bc-0f2d-4e45-91a2-0f72a8929b88 in datapath 7aadcd86-30a0-48ed-988a-324cae3af3e6 unbound from our chassis
Nov 23 21:07:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.941 142158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7aadcd86-30a0-48ed-988a-324cae3af3e6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 23 21:07:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.942 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[b2ad4dfe-5bfd-42e4-b082-c47b36f19123]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:07:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:56.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:56 compute-1 nova_compute[230183]: 2025-11-23 21:07:56.196 230187 INFO nova.virt.libvirt.driver [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Deleting instance files /var/lib/nova/instances/b88f69cf-a706-408d-8dd0-9c891ac278df_del
Nov 23 21:07:56 compute-1 nova_compute[230183]: 2025-11-23 21:07:56.197 230187 INFO nova.virt.libvirt.driver [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Deletion of /var/lib/nova/instances/b88f69cf-a706-408d-8dd0-9c891ac278df_del complete
Nov 23 21:07:56 compute-1 nova_compute[230183]: 2025-11-23 21:07:56.256 230187 DEBUG nova.virt.libvirt.host [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Nov 23 21:07:56 compute-1 nova_compute[230183]: 2025-11-23 21:07:56.257 230187 INFO nova.virt.libvirt.host [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] UEFI support detected
Nov 23 21:07:56 compute-1 nova_compute[230183]: 2025-11-23 21:07:56.258 230187 INFO nova.compute.manager [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Took 0.73 seconds to destroy the instance on the hypervisor.
Nov 23 21:07:56 compute-1 nova_compute[230183]: 2025-11-23 21:07:56.259 230187 DEBUG oslo.service.loopingcall [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 23 21:07:56 compute-1 nova_compute[230183]: 2025-11-23 21:07:56.259 230187 DEBUG nova.compute.manager [-] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 23 21:07:56 compute-1 nova_compute[230183]: 2025-11-23 21:07:56.260 230187 DEBUG nova.network.neutron [-] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 23 21:07:56 compute-1 ceph-mon[80135]: pgmap v795: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 337 KiB/s rd, 1.2 MiB/s wr, 85 op/s
Nov 23 21:07:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:56 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120003b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:56 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:56 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:56 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:07:56 compute-1 nova_compute[230183]: 2025-11-23 21:07:56.623 230187 DEBUG nova.network.neutron [req-d3a780de-ddd5-46c8-9ead-3ee9ea347c12 req-b57ce57d-daef-4102-89c7-6bac34bd8d09 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Updated VIF entry in instance network info cache for port f23315bc-0f2d-4e45-91a2-0f72a8929b88. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 23 21:07:56 compute-1 nova_compute[230183]: 2025-11-23 21:07:56.624 230187 DEBUG nova.network.neutron [req-d3a780de-ddd5-46c8-9ead-3ee9ea347c12 req-b57ce57d-daef-4102-89c7-6bac34bd8d09 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Updating instance_info_cache with network_info: [{"id": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "address": "fa:16:3e:6f:7a:f0", "network": {"id": "7aadcd86-30a0-48ed-988a-324cae3af3e6", "bridge": "br-int", "label": "tempest-network-smoke--57523881", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf23315bc-0f", "ovs_interfaceid": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:07:56 compute-1 nova_compute[230183]: 2025-11-23 21:07:56.641 230187 DEBUG oslo_concurrency.lockutils [req-d3a780de-ddd5-46c8-9ead-3ee9ea347c12 req-b57ce57d-daef-4102-89c7-6bac34bd8d09 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-b88f69cf-a706-408d-8dd0-9c891ac278df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:07:56 compute-1 nova_compute[230183]: 2025-11-23 21:07:56.759 230187 DEBUG nova.network.neutron [-] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:07:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:56.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:56 compute-1 nova_compute[230183]: 2025-11-23 21:07:56.801 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:07:56 compute-1 sshd-session[234529]: Invalid user solv from 161.35.133.66 port 45772
Nov 23 21:07:56 compute-1 sshd-session[234529]: Connection closed by invalid user solv 161.35.133.66 port 45772 [preauth]
Nov 23 21:07:56 compute-1 nova_compute[230183]: 2025-11-23 21:07:56.835 230187 INFO nova.compute.manager [-] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Took 0.58 seconds to deallocate network for instance.
Nov 23 21:07:56 compute-1 nova_compute[230183]: 2025-11-23 21:07:56.876 230187 DEBUG oslo_concurrency.lockutils [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:07:56 compute-1 nova_compute[230183]: 2025-11-23 21:07:56.877 230187 DEBUG oslo_concurrency.lockutils [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:07:56 compute-1 nova_compute[230183]: 2025-11-23 21:07:56.929 230187 DEBUG oslo_concurrency.processutils [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:07:57 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:07:57 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4131247952' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.384 230187 DEBUG oslo_concurrency.processutils [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.394 230187 DEBUG nova.compute.provider_tree [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.410 230187 DEBUG nova.scheduler.client.report [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.422 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.434 230187 DEBUG oslo_concurrency.lockutils [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.477 230187 INFO nova.scheduler.client.report [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Deleted allocations for instance b88f69cf-a706-408d-8dd0-9c891ac278df
Nov 23 21:07:57 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:57 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.556 230187 DEBUG nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received event network-vif-unplugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.557 230187 DEBUG oslo_concurrency.lockutils [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.558 230187 DEBUG oslo_concurrency.lockutils [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.558 230187 DEBUG oslo_concurrency.lockutils [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.559 230187 DEBUG nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] No waiting events found dispatching network-vif-unplugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.559 230187 WARNING nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received unexpected event network-vif-unplugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 for instance with vm_state deleted and task_state None.
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.560 230187 DEBUG nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received event network-vif-plugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.560 230187 DEBUG oslo_concurrency.lockutils [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.561 230187 DEBUG oslo_concurrency.lockutils [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.561 230187 DEBUG oslo_concurrency.lockutils [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.562 230187 DEBUG nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] No waiting events found dispatching network-vif-plugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.563 230187 WARNING nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received unexpected event network-vif-plugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 for instance with vm_state deleted and task_state None.
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.563 230187 DEBUG nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received event network-vif-plugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.564 230187 DEBUG oslo_concurrency.lockutils [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.564 230187 DEBUG oslo_concurrency.lockutils [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.565 230187 DEBUG oslo_concurrency.lockutils [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.565 230187 DEBUG nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] No waiting events found dispatching network-vif-plugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.566 230187 WARNING nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received unexpected event network-vif-plugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 for instance with vm_state deleted and task_state None.
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.566 230187 DEBUG nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received event network-vif-plugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.567 230187 DEBUG oslo_concurrency.lockutils [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.567 230187 DEBUG oslo_concurrency.lockutils [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.568 230187 DEBUG oslo_concurrency.lockutils [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.568 230187 DEBUG nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] No waiting events found dispatching network-vif-plugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.569 230187 WARNING nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received unexpected event network-vif-plugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 for instance with vm_state deleted and task_state None.
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.569 230187 DEBUG nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received event network-vif-unplugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.570 230187 DEBUG oslo_concurrency.lockutils [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.570 230187 DEBUG oslo_concurrency.lockutils [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.571 230187 DEBUG oslo_concurrency.lockutils [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.571 230187 DEBUG nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] No waiting events found dispatching network-vif-unplugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.572 230187 WARNING nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received unexpected event network-vif-unplugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 for instance with vm_state deleted and task_state None.
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.572 230187 DEBUG nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received event network-vif-plugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.573 230187 DEBUG oslo_concurrency.lockutils [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.573 230187 DEBUG oslo_concurrency.lockutils [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.574 230187 DEBUG oslo_concurrency.lockutils [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.574 230187 DEBUG nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] No waiting events found dispatching network-vif-plugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.575 230187 WARNING nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received unexpected event network-vif-plugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 for instance with vm_state deleted and task_state None.
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.576 230187 DEBUG nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received event network-vif-deleted-f23315bc-0f2d-4e45-91a2-0f72a8929b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:07:57 compute-1 nova_compute[230183]: 2025-11-23 21:07:57.581 230187 DEBUG oslo_concurrency.lockutils [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:07:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:58.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:58 compute-1 ceph-mon[80135]: pgmap v796: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 23 KiB/s wr, 29 op/s
Nov 23 21:07:58 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/4131247952' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:07:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:58 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb130003b20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:58 compute-1 nova_compute[230183]: 2025-11-23 21:07:58.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:07:58 compute-1 nova_compute[230183]: 2025-11-23 21:07:58.427 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 21:07:58 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:58 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120003b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:07:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:07:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:58.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:07:59 compute-1 nova_compute[230183]: 2025-11-23 21:07:59.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:07:59 compute-1 nova_compute[230183]: 2025-11-23 21:07:59.426 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 21:07:59 compute-1 nova_compute[230183]: 2025-11-23 21:07:59.427 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 21:07:59 compute-1 nova_compute[230183]: 2025-11-23 21:07:59.446 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 21:07:59 compute-1 nova_compute[230183]: 2025-11-23 21:07:59.446 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:07:59 compute-1 nova_compute[230183]: 2025-11-23 21:07:59.447 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:07:59 compute-1 nova_compute[230183]: 2025-11-23 21:07:59.473 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:07:59 compute-1 nova_compute[230183]: 2025-11-23 21:07:59.473 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:07:59 compute-1 nova_compute[230183]: 2025-11-23 21:07:59.473 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:07:59 compute-1 nova_compute[230183]: 2025-11-23 21:07:59.473 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 21:07:59 compute-1 nova_compute[230183]: 2025-11-23 21:07:59.474 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:07:59 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:59 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c002e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:07:59 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:07:59 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2288679670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:07:59 compute-1 nova_compute[230183]: 2025-11-23 21:07:59.943 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:08:00 compute-1 nova_compute[230183]: 2025-11-23 21:08:00.098 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 21:08:00 compute-1 nova_compute[230183]: 2025-11-23 21:08:00.099 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4933MB free_disk=59.94269561767578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 21:08:00 compute-1 nova_compute[230183]: 2025-11-23 21:08:00.099 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:08:00 compute-1 nova_compute[230183]: 2025-11-23 21:08:00.099 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:08:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:00.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:00 compute-1 nova_compute[230183]: 2025-11-23 21:08:00.167 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 21:08:00 compute-1 nova_compute[230183]: 2025-11-23 21:08:00.168 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 21:08:00 compute-1 nova_compute[230183]: 2025-11-23 21:08:00.196 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:08:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:08:00 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118004010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 21:08:00 compute-1 ceph-mon[80135]: pgmap v797: 337 pgs: 337 active+clean; 118 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 23 KiB/s wr, 30 op/s
Nov 23 21:08:00 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2288679670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:08:00 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:08:00 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb130003b20 fd 38 proxy ignored for local
Nov 23 21:08:00 compute-1 kernel: ganesha.nfsd[234330]: segfault at 50 ip 00007fb1ed0b932e sp 00007fb1ae7fb210 error 4 in libntirpc.so.5.8[7fb1ed09e000+2c000] likely on CPU 5 (core 0, socket 5)
Nov 23 21:08:00 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 23 21:08:00 compute-1 systemd[1]: Started Process Core Dump (PID 234598/UID 0).
Nov 23 21:08:00 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:08:00 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2263582620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:08:00 compute-1 nova_compute[230183]: 2025-11-23 21:08:00.644 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:08:00 compute-1 nova_compute[230183]: 2025-11-23 21:08:00.649 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:08:00 compute-1 nova_compute[230183]: 2025-11-23 21:08:00.675 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:08:00 compute-1 nova_compute[230183]: 2025-11-23 21:08:00.703 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 21:08:00 compute-1 nova_compute[230183]: 2025-11-23 21:08:00.703 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:08:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:00.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:00 compute-1 nova_compute[230183]: 2025-11-23 21:08:00.805 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:01 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2263582620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:08:01 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:08:01 compute-1 nova_compute[230183]: 2025-11-23 21:08:01.547 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:01 compute-1 systemd-coredump[234599]: Process 233269 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 57:
                                                    #0  0x00007fb1ed0b932e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 23 21:08:01 compute-1 nova_compute[230183]: 2025-11-23 21:08:01.645 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:01 compute-1 systemd[1]: systemd-coredump@12-234598-0.service: Deactivated successfully.
Nov 23 21:08:01 compute-1 systemd[1]: systemd-coredump@12-234598-0.service: Consumed 1.100s CPU time.
Nov 23 21:08:01 compute-1 nova_compute[230183]: 2025-11-23 21:08:01.684 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:08:01 compute-1 nova_compute[230183]: 2025-11-23 21:08:01.696 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:08:01 compute-1 nova_compute[230183]: 2025-11-23 21:08:01.696 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:08:01 compute-1 podman[234608]: 2025-11-23 21:08:01.74067205 +0000 UTC m=+0.042361055 container died d5b74120fbf861ec21b580a080981227bcd9c52288af0a95ae65bbf92f739f0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 23 21:08:01 compute-1 systemd[1]: var-lib-containers-storage-overlay-24653c180c9318b519976f965307614ae6e36c0f21676083060c7a6287ff60f0-merged.mount: Deactivated successfully.
Nov 23 21:08:01 compute-1 podman[234608]: 2025-11-23 21:08:01.777020548 +0000 UTC m=+0.078709563 container remove d5b74120fbf861ec21b580a080981227bcd9c52288af0a95ae65bbf92f739f0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 23 21:08:01 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Main process exited, code=exited, status=139/n/a
Nov 23 21:08:01 compute-1 nova_compute[230183]: 2025-11-23 21:08:01.831 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:01 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Failed with result 'exit-code'.
Nov 23 21:08:01 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.483s CPU time.
Nov 23 21:08:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:02.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:02 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210802 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 21:08:02 compute-1 ceph-mon[80135]: pgmap v798: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 24 KiB/s wr, 57 op/s
Nov 23 21:08:02 compute-1 nova_compute[230183]: 2025-11-23 21:08:02.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:08:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:02.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:03 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/444257315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:08:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:08:03 compute-1 sudo[234652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:08:03 compute-1 sudo[234652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:08:03 compute-1 sudo[234652]: pam_unix(sudo:session): session closed for user root
Nov 23 21:08:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:04.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:04 compute-1 ceph-mon[80135]: pgmap v799: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.7 KiB/s wr, 50 op/s
Nov 23 21:08:04 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2500973786' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:08:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:04.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:05 compute-1 nova_compute[230183]: 2025-11-23 21:08:05.807 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:06.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:06 compute-1 ceph-mon[80135]: pgmap v800: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.7 KiB/s wr, 50 op/s
Nov 23 21:08:06 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210806 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 21:08:06 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:08:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:06.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:06 compute-1 nova_compute[230183]: 2025-11-23 21:08:06.864 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:07 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1512843561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:08:07 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2123520336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:08:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:08.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:08 compute-1 ceph-mon[80135]: pgmap v801: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 23 21:08:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/3522098395' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 21:08:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/3522098395' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 21:08:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:08:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:08.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:08:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:10.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:10 compute-1 ceph-mon[80135]: pgmap v802: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 23 21:08:10 compute-1 nova_compute[230183]: 2025-11-23 21:08:10.763 230187 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763932075.7610657, b88f69cf-a706-408d-8dd0-9c891ac278df => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 23 21:08:10 compute-1 nova_compute[230183]: 2025-11-23 21:08:10.763 230187 INFO nova.compute.manager [-] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] VM Stopped (Lifecycle Event)
Nov 23 21:08:10 compute-1 nova_compute[230183]: 2025-11-23 21:08:10.779 230187 DEBUG nova.compute.manager [None req-a87ce78c-0b2c-4bd5-ae88-8c1b0b7ab7f8 - - - - - -] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:08:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:08:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:10.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:08:10 compute-1 nova_compute[230183]: 2025-11-23 21:08:10.807 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:11 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:11.350 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 21:08:11 compute-1 nova_compute[230183]: 2025-11-23 21:08:11.350 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:11 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:11.351 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 23 21:08:11 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:08:11 compute-1 nova_compute[230183]: 2025-11-23 21:08:11.866 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:12 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Scheduled restart job, restart counter is at 13.
Nov 23 21:08:12 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 21:08:12 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.483s CPU time.
Nov 23 21:08:12 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Start request repeated too quickly.
Nov 23 21:08:12 compute-1 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Failed with result 'exit-code'.
Nov 23 21:08:12 compute-1 systemd[1]: Failed to start Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 21:08:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:12.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:12 compute-1 ceph-mon[80135]: pgmap v803: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 23 21:08:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.002000055s ======
Nov 23 21:08:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:12.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000055s
Nov 23 21:08:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:08:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:14.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:08:14 compute-1 ceph-mon[80135]: pgmap v804: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 23 21:08:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:08:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:14.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:08:14 compute-1 nova_compute[230183]: 2025-11-23 21:08:14.906 230187 DEBUG oslo_concurrency.lockutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:08:14 compute-1 nova_compute[230183]: 2025-11-23 21:08:14.906 230187 DEBUG oslo_concurrency.lockutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:08:14 compute-1 nova_compute[230183]: 2025-11-23 21:08:14.925 230187 DEBUG nova.compute.manager [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 23 21:08:15 compute-1 nova_compute[230183]: 2025-11-23 21:08:15.004 230187 DEBUG oslo_concurrency.lockutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:08:15 compute-1 nova_compute[230183]: 2025-11-23 21:08:15.004 230187 DEBUG oslo_concurrency.lockutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:08:15 compute-1 nova_compute[230183]: 2025-11-23 21:08:15.011 230187 DEBUG nova.virt.hardware [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 23 21:08:15 compute-1 nova_compute[230183]: 2025-11-23 21:08:15.011 230187 INFO nova.compute.claims [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Claim successful on node compute-1.ctlplane.example.com
Nov 23 21:08:15 compute-1 nova_compute[230183]: 2025-11-23 21:08:15.095 230187 DEBUG oslo_concurrency.processutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:08:15 compute-1 nova_compute[230183]: 2025-11-23 21:08:15.524 230187 DEBUG oslo_concurrency.processutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:08:15 compute-1 nova_compute[230183]: 2025-11-23 21:08:15.531 230187 DEBUG nova.compute.provider_tree [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:08:15 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/74443691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:08:15 compute-1 nova_compute[230183]: 2025-11-23 21:08:15.553 230187 DEBUG nova.scheduler.client.report [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:08:15 compute-1 nova_compute[230183]: 2025-11-23 21:08:15.580 230187 DEBUG oslo_concurrency.lockutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:08:15 compute-1 nova_compute[230183]: 2025-11-23 21:08:15.581 230187 DEBUG nova.compute.manager [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 23 21:08:15 compute-1 nova_compute[230183]: 2025-11-23 21:08:15.663 230187 DEBUG nova.compute.manager [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 23 21:08:15 compute-1 nova_compute[230183]: 2025-11-23 21:08:15.663 230187 DEBUG nova.network.neutron [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 23 21:08:15 compute-1 nova_compute[230183]: 2025-11-23 21:08:15.714 230187 INFO nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 23 21:08:15 compute-1 nova_compute[230183]: 2025-11-23 21:08:15.734 230187 DEBUG nova.compute.manager [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 23 21:08:15 compute-1 nova_compute[230183]: 2025-11-23 21:08:15.809 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:15 compute-1 nova_compute[230183]: 2025-11-23 21:08:15.819 230187 DEBUG nova.compute.manager [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 23 21:08:15 compute-1 nova_compute[230183]: 2025-11-23 21:08:15.820 230187 DEBUG nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 23 21:08:15 compute-1 nova_compute[230183]: 2025-11-23 21:08:15.821 230187 INFO nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Creating image(s)
Nov 23 21:08:15 compute-1 nova_compute[230183]: 2025-11-23 21:08:15.850 230187 DEBUG nova.storage.rbd_utils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:08:15 compute-1 nova_compute[230183]: 2025-11-23 21:08:15.877 230187 DEBUG nova.storage.rbd_utils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:08:15 compute-1 nova_compute[230183]: 2025-11-23 21:08:15.906 230187 DEBUG nova.storage.rbd_utils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:08:15 compute-1 nova_compute[230183]: 2025-11-23 21:08:15.910 230187 DEBUG oslo_concurrency.processutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:08:15 compute-1 nova_compute[230183]: 2025-11-23 21:08:15.963 230187 DEBUG nova.policy [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9fb5352c62684f2ba3a326a953a10dfe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '782593db60784ab8bff41fe87d72ff5f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 23 21:08:15 compute-1 nova_compute[230183]: 2025-11-23 21:08:15.991 230187 DEBUG oslo_concurrency.processutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:08:15 compute-1 nova_compute[230183]: 2025-11-23 21:08:15.991 230187 DEBUG oslo_concurrency.lockutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:08:15 compute-1 nova_compute[230183]: 2025-11-23 21:08:15.992 230187 DEBUG oslo_concurrency.lockutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:08:15 compute-1 nova_compute[230183]: 2025-11-23 21:08:15.992 230187 DEBUG oslo_concurrency.lockutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:08:16 compute-1 nova_compute[230183]: 2025-11-23 21:08:16.016 230187 DEBUG nova.storage.rbd_utils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:08:16 compute-1 nova_compute[230183]: 2025-11-23 21:08:16.019 230187 DEBUG oslo_concurrency.processutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:08:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:16.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:16 compute-1 nova_compute[230183]: 2025-11-23 21:08:16.279 230187 DEBUG oslo_concurrency.processutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:08:16 compute-1 nova_compute[230183]: 2025-11-23 21:08:16.339 230187 DEBUG nova.storage.rbd_utils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] resizing rbd image 451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 23 21:08:16 compute-1 nova_compute[230183]: 2025-11-23 21:08:16.444 230187 DEBUG nova.objects.instance [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'migration_context' on Instance uuid 451aa9f7-4cd0-413e-beed-8a30a8685ff1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:08:16 compute-1 nova_compute[230183]: 2025-11-23 21:08:16.460 230187 DEBUG nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 23 21:08:16 compute-1 nova_compute[230183]: 2025-11-23 21:08:16.461 230187 DEBUG nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Ensure instance console log exists: /var/lib/nova/instances/451aa9f7-4cd0-413e-beed-8a30a8685ff1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 23 21:08:16 compute-1 nova_compute[230183]: 2025-11-23 21:08:16.461 230187 DEBUG oslo_concurrency.lockutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:08:16 compute-1 nova_compute[230183]: 2025-11-23 21:08:16.462 230187 DEBUG oslo_concurrency.lockutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:08:16 compute-1 nova_compute[230183]: 2025-11-23 21:08:16.462 230187 DEBUG oslo_concurrency.lockutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:08:16 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:08:16 compute-1 ceph-mon[80135]: pgmap v805: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Nov 23 21:08:16 compute-1 podman[234872]: 2025-11-23 21:08:16.660935032 +0000 UTC m=+0.065664862 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 23 21:08:16 compute-1 podman[234871]: 2025-11-23 21:08:16.720914684 +0000 UTC m=+0.126659302 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 21:08:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:16.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:16 compute-1 nova_compute[230183]: 2025-11-23 21:08:16.910 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:17 compute-1 nova_compute[230183]: 2025-11-23 21:08:17.086 230187 DEBUG nova.network.neutron [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Successfully created port: 932faebb-b274-4e17-94a9-9339a27c275f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 23 21:08:18 compute-1 nova_compute[230183]: 2025-11-23 21:08:18.183 230187 DEBUG nova.network.neutron [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Successfully updated port: 932faebb-b274-4e17-94a9-9339a27c275f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 23 21:08:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:18.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:18 compute-1 nova_compute[230183]: 2025-11-23 21:08:18.195 230187 DEBUG oslo_concurrency.lockutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:08:18 compute-1 nova_compute[230183]: 2025-11-23 21:08:18.195 230187 DEBUG oslo_concurrency.lockutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquired lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:08:18 compute-1 nova_compute[230183]: 2025-11-23 21:08:18.195 230187 DEBUG nova.network.neutron [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 23 21:08:18 compute-1 nova_compute[230183]: 2025-11-23 21:08:18.307 230187 DEBUG nova.compute.manager [req-3a227a43-659c-4147-80cc-c43e34a28cae req-3bcd89bf-e31a-4ee8-96b0-e64ba34047c9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received event network-changed-932faebb-b274-4e17-94a9-9339a27c275f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:08:18 compute-1 nova_compute[230183]: 2025-11-23 21:08:18.307 230187 DEBUG nova.compute.manager [req-3a227a43-659c-4147-80cc-c43e34a28cae req-3bcd89bf-e31a-4ee8-96b0-e64ba34047c9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Refreshing instance network info cache due to event network-changed-932faebb-b274-4e17-94a9-9339a27c275f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 23 21:08:18 compute-1 nova_compute[230183]: 2025-11-23 21:08:18.308 230187 DEBUG oslo_concurrency.lockutils [req-3a227a43-659c-4147-80cc-c43e34a28cae req-3bcd89bf-e31a-4ee8-96b0-e64ba34047c9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:08:18 compute-1 nova_compute[230183]: 2025-11-23 21:08:18.379 230187 DEBUG nova.network.neutron [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 23 21:08:18 compute-1 ceph-mon[80135]: pgmap v806: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Nov 23 21:08:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:08:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:08:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:18.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:08:19 compute-1 nova_compute[230183]: 2025-11-23 21:08:19.380 230187 DEBUG nova.network.neutron [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Updating instance_info_cache with network_info: [{"id": "932faebb-b274-4e17-94a9-9339a27c275f", "address": "fa:16:3e:22:80:b0", "network": {"id": "0cfca448-ff51-45d5-9a96-e7d306414608", "bridge": "br-int", "label": "tempest-network-smoke--344329804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap932faebb-b2", "ovs_interfaceid": "932faebb-b274-4e17-94a9-9339a27c275f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:08:19 compute-1 nova_compute[230183]: 2025-11-23 21:08:19.395 230187 DEBUG oslo_concurrency.lockutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Releasing lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:08:19 compute-1 nova_compute[230183]: 2025-11-23 21:08:19.395 230187 DEBUG nova.compute.manager [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Instance network_info: |[{"id": "932faebb-b274-4e17-94a9-9339a27c275f", "address": "fa:16:3e:22:80:b0", "network": {"id": "0cfca448-ff51-45d5-9a96-e7d306414608", "bridge": "br-int", "label": "tempest-network-smoke--344329804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap932faebb-b2", "ovs_interfaceid": "932faebb-b274-4e17-94a9-9339a27c275f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 23 21:08:19 compute-1 nova_compute[230183]: 2025-11-23 21:08:19.395 230187 DEBUG oslo_concurrency.lockutils [req-3a227a43-659c-4147-80cc-c43e34a28cae req-3bcd89bf-e31a-4ee8-96b0-e64ba34047c9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:08:19 compute-1 nova_compute[230183]: 2025-11-23 21:08:19.396 230187 DEBUG nova.network.neutron [req-3a227a43-659c-4147-80cc-c43e34a28cae req-3bcd89bf-e31a-4ee8-96b0-e64ba34047c9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Refreshing network info cache for port 932faebb-b274-4e17-94a9-9339a27c275f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 23 21:08:19 compute-1 nova_compute[230183]: 2025-11-23 21:08:19.398 230187 DEBUG nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Start _get_guest_xml network_info=[{"id": "932faebb-b274-4e17-94a9-9339a27c275f", "address": "fa:16:3e:22:80:b0", "network": {"id": "0cfca448-ff51-45d5-9a96-e7d306414608", "bridge": "br-int", "label": "tempest-network-smoke--344329804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap932faebb-b2", "ovs_interfaceid": "932faebb-b274-4e17-94a9-9339a27c275f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T21:05:43Z,direct_url=<?>,disk_format='qcow2',id=3c45fa6c-8a99-4359-a34e-d89f4e1e77d0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3f8fb5175f85402ba20cf9c6989d47cf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-23T21:05:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_options': None, 'encryption_format': None, 'guest_format': None, 'encrypted': False, 'size': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'image_id': '3c45fa6c-8a99-4359-a34e-d89f4e1e77d0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 23 21:08:19 compute-1 nova_compute[230183]: 2025-11-23 21:08:19.401 230187 WARNING nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 21:08:19 compute-1 nova_compute[230183]: 2025-11-23 21:08:19.405 230187 DEBUG nova.virt.libvirt.host [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 23 21:08:19 compute-1 nova_compute[230183]: 2025-11-23 21:08:19.405 230187 DEBUG nova.virt.libvirt.host [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 23 21:08:19 compute-1 nova_compute[230183]: 2025-11-23 21:08:19.410 230187 DEBUG nova.virt.libvirt.host [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 23 21:08:19 compute-1 nova_compute[230183]: 2025-11-23 21:08:19.411 230187 DEBUG nova.virt.libvirt.host [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 23 21:08:19 compute-1 nova_compute[230183]: 2025-11-23 21:08:19.411 230187 DEBUG nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 23 21:08:19 compute-1 nova_compute[230183]: 2025-11-23 21:08:19.411 230187 DEBUG nova.virt.hardware [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-23T21:05:42Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='56044b93-2979-48aa-b67f-c37e1b489306',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T21:05:43Z,direct_url=<?>,disk_format='qcow2',id=3c45fa6c-8a99-4359-a34e-d89f4e1e77d0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3f8fb5175f85402ba20cf9c6989d47cf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-23T21:05:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 23 21:08:19 compute-1 nova_compute[230183]: 2025-11-23 21:08:19.412 230187 DEBUG nova.virt.hardware [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 23 21:08:19 compute-1 nova_compute[230183]: 2025-11-23 21:08:19.412 230187 DEBUG nova.virt.hardware [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 23 21:08:19 compute-1 nova_compute[230183]: 2025-11-23 21:08:19.412 230187 DEBUG nova.virt.hardware [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 23 21:08:19 compute-1 nova_compute[230183]: 2025-11-23 21:08:19.413 230187 DEBUG nova.virt.hardware [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 23 21:08:19 compute-1 nova_compute[230183]: 2025-11-23 21:08:19.413 230187 DEBUG nova.virt.hardware [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 23 21:08:19 compute-1 nova_compute[230183]: 2025-11-23 21:08:19.413 230187 DEBUG nova.virt.hardware [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 23 21:08:19 compute-1 nova_compute[230183]: 2025-11-23 21:08:19.413 230187 DEBUG nova.virt.hardware [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 23 21:08:19 compute-1 nova_compute[230183]: 2025-11-23 21:08:19.414 230187 DEBUG nova.virt.hardware [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 23 21:08:19 compute-1 nova_compute[230183]: 2025-11-23 21:08:19.414 230187 DEBUG nova.virt.hardware [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 23 21:08:19 compute-1 nova_compute[230183]: 2025-11-23 21:08:19.414 230187 DEBUG nova.virt.hardware [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 23 21:08:19 compute-1 nova_compute[230183]: 2025-11-23 21:08:19.416 230187 DEBUG oslo_concurrency.processutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:08:19 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 21:08:19 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3318702537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:08:19 compute-1 nova_compute[230183]: 2025-11-23 21:08:19.867 230187 DEBUG oslo_concurrency.processutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:08:19 compute-1 nova_compute[230183]: 2025-11-23 21:08:19.895 230187 DEBUG nova.storage.rbd_utils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:08:19 compute-1 nova_compute[230183]: 2025-11-23 21:08:19.901 230187 DEBUG oslo_concurrency.processutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:08:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:20.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:20 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 21:08:20 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1702176561' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:08:20 compute-1 nova_compute[230183]: 2025-11-23 21:08:20.350 230187 DEBUG oslo_concurrency.processutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:08:20 compute-1 nova_compute[230183]: 2025-11-23 21:08:20.352 230187 DEBUG nova.virt.libvirt.vif [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-23T21:08:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-163368612',display_name='tempest-TestNetworkBasicOps-server-163368612',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-163368612',id=3,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO6ZIj438fQUpVfUUeh9lapkxwknyZNU4rtkhiTUYmBPGvkJZXNdDf4srslhWKNNtoBf1C2D4cd/jBUBjs52xRw75wPIQzFCZ8VrPBNO0yEc0UePTukzbeBIVnoSLQIebA==',key_name='tempest-TestNetworkBasicOps-1883192829',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-ptm322on',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T21:08:15Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=451aa9f7-4cd0-413e-beed-8a30a8685ff1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "932faebb-b274-4e17-94a9-9339a27c275f", "address": "fa:16:3e:22:80:b0", "network": {"id": "0cfca448-ff51-45d5-9a96-e7d306414608", "bridge": "br-int", "label": "tempest-network-smoke--344329804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap932faebb-b2", "ovs_interfaceid": "932faebb-b274-4e17-94a9-9339a27c275f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 23 21:08:20 compute-1 nova_compute[230183]: 2025-11-23 21:08:20.352 230187 DEBUG nova.network.os_vif_util [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "932faebb-b274-4e17-94a9-9339a27c275f", "address": "fa:16:3e:22:80:b0", "network": {"id": "0cfca448-ff51-45d5-9a96-e7d306414608", "bridge": "br-int", "label": "tempest-network-smoke--344329804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap932faebb-b2", "ovs_interfaceid": "932faebb-b274-4e17-94a9-9339a27c275f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 23 21:08:20 compute-1 nova_compute[230183]: 2025-11-23 21:08:20.353 230187 DEBUG nova.network.os_vif_util [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:80:b0,bridge_name='br-int',has_traffic_filtering=True,id=932faebb-b274-4e17-94a9-9339a27c275f,network=Network(0cfca448-ff51-45d5-9a96-e7d306414608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap932faebb-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 23 21:08:20 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:20.353 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d8ff4ac4-2bee-48db-b79e-2466bc4db046, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:08:20 compute-1 nova_compute[230183]: 2025-11-23 21:08:20.354 230187 DEBUG nova.objects.instance [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'pci_devices' on Instance uuid 451aa9f7-4cd0-413e-beed-8a30a8685ff1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:08:20 compute-1 nova_compute[230183]: 2025-11-23 21:08:20.370 230187 DEBUG nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] End _get_guest_xml xml=<domain type="kvm">
Nov 23 21:08:20 compute-1 nova_compute[230183]:   <uuid>451aa9f7-4cd0-413e-beed-8a30a8685ff1</uuid>
Nov 23 21:08:20 compute-1 nova_compute[230183]:   <name>instance-00000003</name>
Nov 23 21:08:20 compute-1 nova_compute[230183]:   <memory>131072</memory>
Nov 23 21:08:20 compute-1 nova_compute[230183]:   <vcpu>1</vcpu>
Nov 23 21:08:20 compute-1 nova_compute[230183]:   <metadata>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 21:08:20 compute-1 nova_compute[230183]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:       <nova:name>tempest-TestNetworkBasicOps-server-163368612</nova:name>
Nov 23 21:08:20 compute-1 nova_compute[230183]:       <nova:creationTime>2025-11-23 21:08:19</nova:creationTime>
Nov 23 21:08:20 compute-1 nova_compute[230183]:       <nova:flavor name="m1.nano">
Nov 23 21:08:20 compute-1 nova_compute[230183]:         <nova:memory>128</nova:memory>
Nov 23 21:08:20 compute-1 nova_compute[230183]:         <nova:disk>1</nova:disk>
Nov 23 21:08:20 compute-1 nova_compute[230183]:         <nova:swap>0</nova:swap>
Nov 23 21:08:20 compute-1 nova_compute[230183]:         <nova:ephemeral>0</nova:ephemeral>
Nov 23 21:08:20 compute-1 nova_compute[230183]:         <nova:vcpus>1</nova:vcpus>
Nov 23 21:08:20 compute-1 nova_compute[230183]:       </nova:flavor>
Nov 23 21:08:20 compute-1 nova_compute[230183]:       <nova:owner>
Nov 23 21:08:20 compute-1 nova_compute[230183]:         <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 21:08:20 compute-1 nova_compute[230183]:         <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 21:08:20 compute-1 nova_compute[230183]:       </nova:owner>
Nov 23 21:08:20 compute-1 nova_compute[230183]:       <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:       <nova:ports>
Nov 23 21:08:20 compute-1 nova_compute[230183]:         <nova:port uuid="932faebb-b274-4e17-94a9-9339a27c275f">
Nov 23 21:08:20 compute-1 nova_compute[230183]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:         </nova:port>
Nov 23 21:08:20 compute-1 nova_compute[230183]:       </nova:ports>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     </nova:instance>
Nov 23 21:08:20 compute-1 nova_compute[230183]:   </metadata>
Nov 23 21:08:20 compute-1 nova_compute[230183]:   <sysinfo type="smbios">
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <system>
Nov 23 21:08:20 compute-1 nova_compute[230183]:       <entry name="manufacturer">RDO</entry>
Nov 23 21:08:20 compute-1 nova_compute[230183]:       <entry name="product">OpenStack Compute</entry>
Nov 23 21:08:20 compute-1 nova_compute[230183]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 23 21:08:20 compute-1 nova_compute[230183]:       <entry name="serial">451aa9f7-4cd0-413e-beed-8a30a8685ff1</entry>
Nov 23 21:08:20 compute-1 nova_compute[230183]:       <entry name="uuid">451aa9f7-4cd0-413e-beed-8a30a8685ff1</entry>
Nov 23 21:08:20 compute-1 nova_compute[230183]:       <entry name="family">Virtual Machine</entry>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     </system>
Nov 23 21:08:20 compute-1 nova_compute[230183]:   </sysinfo>
Nov 23 21:08:20 compute-1 nova_compute[230183]:   <os>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <boot dev="hd"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <smbios mode="sysinfo"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:   </os>
Nov 23 21:08:20 compute-1 nova_compute[230183]:   <features>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <acpi/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <apic/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <vmcoreinfo/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:   </features>
Nov 23 21:08:20 compute-1 nova_compute[230183]:   <clock offset="utc">
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <timer name="pit" tickpolicy="delay"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <timer name="hpet" present="no"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:   </clock>
Nov 23 21:08:20 compute-1 nova_compute[230183]:   <cpu mode="host-model" match="exact">
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <topology sockets="1" cores="1" threads="1"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:   </cpu>
Nov 23 21:08:20 compute-1 nova_compute[230183]:   <devices>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <disk type="network" device="disk">
Nov 23 21:08:20 compute-1 nova_compute[230183]:       <driver type="raw" cache="none"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:       <source protocol="rbd" name="vms/451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk">
Nov 23 21:08:20 compute-1 nova_compute[230183]:         <host name="192.168.122.100" port="6789"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:         <host name="192.168.122.102" port="6789"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:         <host name="192.168.122.101" port="6789"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:       </source>
Nov 23 21:08:20 compute-1 nova_compute[230183]:       <auth username="openstack">
Nov 23 21:08:20 compute-1 nova_compute[230183]:         <secret type="ceph" uuid="03808be8-ae4a-5548-82e6-4a294f1bc627"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:       </auth>
Nov 23 21:08:20 compute-1 nova_compute[230183]:       <target dev="vda" bus="virtio"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     </disk>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <disk type="network" device="cdrom">
Nov 23 21:08:20 compute-1 nova_compute[230183]:       <driver type="raw" cache="none"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:       <source protocol="rbd" name="vms/451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk.config">
Nov 23 21:08:20 compute-1 nova_compute[230183]:         <host name="192.168.122.100" port="6789"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:         <host name="192.168.122.102" port="6789"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:         <host name="192.168.122.101" port="6789"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:       </source>
Nov 23 21:08:20 compute-1 nova_compute[230183]:       <auth username="openstack">
Nov 23 21:08:20 compute-1 nova_compute[230183]:         <secret type="ceph" uuid="03808be8-ae4a-5548-82e6-4a294f1bc627"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:       </auth>
Nov 23 21:08:20 compute-1 nova_compute[230183]:       <target dev="sda" bus="sata"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     </disk>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <interface type="ethernet">
Nov 23 21:08:20 compute-1 nova_compute[230183]:       <mac address="fa:16:3e:22:80:b0"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:       <model type="virtio"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:       <driver name="vhost" rx_queue_size="512"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:       <mtu size="1442"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:       <target dev="tap932faebb-b2"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     </interface>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <serial type="pty">
Nov 23 21:08:20 compute-1 nova_compute[230183]:       <log file="/var/lib/nova/instances/451aa9f7-4cd0-413e-beed-8a30a8685ff1/console.log" append="off"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     </serial>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <video>
Nov 23 21:08:20 compute-1 nova_compute[230183]:       <model type="virtio"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     </video>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <input type="tablet" bus="usb"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <rng model="virtio">
Nov 23 21:08:20 compute-1 nova_compute[230183]:       <backend model="random">/dev/urandom</backend>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     </rng>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <controller type="usb" index="0"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     <memballoon model="virtio">
Nov 23 21:08:20 compute-1 nova_compute[230183]:       <stats period="10"/>
Nov 23 21:08:20 compute-1 nova_compute[230183]:     </memballoon>
Nov 23 21:08:20 compute-1 nova_compute[230183]:   </devices>
Nov 23 21:08:20 compute-1 nova_compute[230183]: </domain>
Nov 23 21:08:20 compute-1 nova_compute[230183]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 23 21:08:20 compute-1 nova_compute[230183]: 2025-11-23 21:08:20.371 230187 DEBUG nova.compute.manager [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Preparing to wait for external event network-vif-plugged-932faebb-b274-4e17-94a9-9339a27c275f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 23 21:08:20 compute-1 nova_compute[230183]: 2025-11-23 21:08:20.371 230187 DEBUG oslo_concurrency.lockutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:08:20 compute-1 nova_compute[230183]: 2025-11-23 21:08:20.372 230187 DEBUG oslo_concurrency.lockutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:08:20 compute-1 nova_compute[230183]: 2025-11-23 21:08:20.372 230187 DEBUG oslo_concurrency.lockutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:08:20 compute-1 nova_compute[230183]: 2025-11-23 21:08:20.373 230187 DEBUG nova.virt.libvirt.vif [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-23T21:08:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-163368612',display_name='tempest-TestNetworkBasicOps-server-163368612',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-163368612',id=3,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO6ZIj438fQUpVfUUeh9lapkxwknyZNU4rtkhiTUYmBPGvkJZXNdDf4srslhWKNNtoBf1C2D4cd/jBUBjs52xRw75wPIQzFCZ8VrPBNO0yEc0UePTukzbeBIVnoSLQIebA==',key_name='tempest-TestNetworkBasicOps-1883192829',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-ptm322on',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T21:08:15Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=451aa9f7-4cd0-413e-beed-8a30a8685ff1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "932faebb-b274-4e17-94a9-9339a27c275f", "address": "fa:16:3e:22:80:b0", "network": {"id": "0cfca448-ff51-45d5-9a96-e7d306414608", "bridge": "br-int", "label": "tempest-network-smoke--344329804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap932faebb-b2", "ovs_interfaceid": "932faebb-b274-4e17-94a9-9339a27c275f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 23 21:08:20 compute-1 nova_compute[230183]: 2025-11-23 21:08:20.373 230187 DEBUG nova.network.os_vif_util [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "932faebb-b274-4e17-94a9-9339a27c275f", "address": "fa:16:3e:22:80:b0", "network": {"id": "0cfca448-ff51-45d5-9a96-e7d306414608", "bridge": "br-int", "label": "tempest-network-smoke--344329804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap932faebb-b2", "ovs_interfaceid": "932faebb-b274-4e17-94a9-9339a27c275f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 23 21:08:20 compute-1 nova_compute[230183]: 2025-11-23 21:08:20.373 230187 DEBUG nova.network.os_vif_util [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:80:b0,bridge_name='br-int',has_traffic_filtering=True,id=932faebb-b274-4e17-94a9-9339a27c275f,network=Network(0cfca448-ff51-45d5-9a96-e7d306414608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap932faebb-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 23 21:08:20 compute-1 nova_compute[230183]: 2025-11-23 21:08:20.374 230187 DEBUG os_vif [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:80:b0,bridge_name='br-int',has_traffic_filtering=True,id=932faebb-b274-4e17-94a9-9339a27c275f,network=Network(0cfca448-ff51-45d5-9a96-e7d306414608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap932faebb-b2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 23 21:08:20 compute-1 nova_compute[230183]: 2025-11-23 21:08:20.374 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:20 compute-1 nova_compute[230183]: 2025-11-23 21:08:20.375 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:08:20 compute-1 nova_compute[230183]: 2025-11-23 21:08:20.375 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 23 21:08:20 compute-1 nova_compute[230183]: 2025-11-23 21:08:20.377 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:20 compute-1 nova_compute[230183]: 2025-11-23 21:08:20.378 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap932faebb-b2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:08:20 compute-1 nova_compute[230183]: 2025-11-23 21:08:20.378 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap932faebb-b2, col_values=(('external_ids', {'iface-id': '932faebb-b274-4e17-94a9-9339a27c275f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:22:80:b0', 'vm-uuid': '451aa9f7-4cd0-413e-beed-8a30a8685ff1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:08:20 compute-1 nova_compute[230183]: 2025-11-23 21:08:20.379 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:20 compute-1 NetworkManager[49021]: <info>  [1763932100.3810] manager: (tap932faebb-b2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Nov 23 21:08:20 compute-1 nova_compute[230183]: 2025-11-23 21:08:20.384 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:08:20 compute-1 nova_compute[230183]: 2025-11-23 21:08:20.386 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:20 compute-1 nova_compute[230183]: 2025-11-23 21:08:20.388 230187 INFO os_vif [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:80:b0,bridge_name='br-int',has_traffic_filtering=True,id=932faebb-b274-4e17-94a9-9339a27c275f,network=Network(0cfca448-ff51-45d5-9a96-e7d306414608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap932faebb-b2')
Nov 23 21:08:20 compute-1 nova_compute[230183]: 2025-11-23 21:08:20.426 230187 DEBUG nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 23 21:08:20 compute-1 nova_compute[230183]: 2025-11-23 21:08:20.427 230187 DEBUG nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 23 21:08:20 compute-1 nova_compute[230183]: 2025-11-23 21:08:20.427 230187 DEBUG nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No VIF found with MAC fa:16:3e:22:80:b0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 23 21:08:20 compute-1 nova_compute[230183]: 2025-11-23 21:08:20.428 230187 INFO nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Using config drive
Nov 23 21:08:20 compute-1 nova_compute[230183]: 2025-11-23 21:08:20.452 230187 DEBUG nova.storage.rbd_utils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:08:20 compute-1 ceph-mon[80135]: pgmap v807: 337 pgs: 337 active+clean; 53 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 823 KiB/s wr, 26 op/s
Nov 23 21:08:20 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3318702537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:08:20 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1702176561' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:08:20 compute-1 podman[235000]: 2025-11-23 21:08:20.656026704 +0000 UTC m=+0.075324860 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 23 21:08:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:20.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:21 compute-1 nova_compute[230183]: 2025-11-23 21:08:21.113 230187 INFO nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Creating config drive at /var/lib/nova/instances/451aa9f7-4cd0-413e-beed-8a30a8685ff1/disk.config
Nov 23 21:08:21 compute-1 nova_compute[230183]: 2025-11-23 21:08:21.123 230187 DEBUG oslo_concurrency.processutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/451aa9f7-4cd0-413e-beed-8a30a8685ff1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpou5qgsff execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:08:21 compute-1 nova_compute[230183]: 2025-11-23 21:08:21.250 230187 DEBUG oslo_concurrency.processutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/451aa9f7-4cd0-413e-beed-8a30a8685ff1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpou5qgsff" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:08:21 compute-1 nova_compute[230183]: 2025-11-23 21:08:21.281 230187 DEBUG nova.storage.rbd_utils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:08:21 compute-1 nova_compute[230183]: 2025-11-23 21:08:21.285 230187 DEBUG oslo_concurrency.processutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/451aa9f7-4cd0-413e-beed-8a30a8685ff1/disk.config 451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:08:21 compute-1 nova_compute[230183]: 2025-11-23 21:08:21.322 230187 DEBUG nova.network.neutron [req-3a227a43-659c-4147-80cc-c43e34a28cae req-3bcd89bf-e31a-4ee8-96b0-e64ba34047c9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Updated VIF entry in instance network info cache for port 932faebb-b274-4e17-94a9-9339a27c275f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 23 21:08:21 compute-1 nova_compute[230183]: 2025-11-23 21:08:21.323 230187 DEBUG nova.network.neutron [req-3a227a43-659c-4147-80cc-c43e34a28cae req-3bcd89bf-e31a-4ee8-96b0-e64ba34047c9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Updating instance_info_cache with network_info: [{"id": "932faebb-b274-4e17-94a9-9339a27c275f", "address": "fa:16:3e:22:80:b0", "network": {"id": "0cfca448-ff51-45d5-9a96-e7d306414608", "bridge": "br-int", "label": "tempest-network-smoke--344329804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap932faebb-b2", "ovs_interfaceid": "932faebb-b274-4e17-94a9-9339a27c275f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:08:21 compute-1 nova_compute[230183]: 2025-11-23 21:08:21.338 230187 DEBUG oslo_concurrency.lockutils [req-3a227a43-659c-4147-80cc-c43e34a28cae req-3bcd89bf-e31a-4ee8-96b0-e64ba34047c9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:08:21 compute-1 nova_compute[230183]: 2025-11-23 21:08:21.452 230187 DEBUG oslo_concurrency.processutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/451aa9f7-4cd0-413e-beed-8a30a8685ff1/disk.config 451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:08:21 compute-1 nova_compute[230183]: 2025-11-23 21:08:21.453 230187 INFO nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Deleting local config drive /var/lib/nova/instances/451aa9f7-4cd0-413e-beed-8a30a8685ff1/disk.config because it was imported into RBD.
Nov 23 21:08:21 compute-1 kernel: tap932faebb-b2: entered promiscuous mode
Nov 23 21:08:21 compute-1 NetworkManager[49021]: <info>  [1763932101.5037] manager: (tap932faebb-b2): new Tun device (/org/freedesktop/NetworkManager/Devices/33)
Nov 23 21:08:21 compute-1 ovn_controller[132845]: 2025-11-23T21:08:21Z|00045|binding|INFO|Claiming lport 932faebb-b274-4e17-94a9-9339a27c275f for this chassis.
Nov 23 21:08:21 compute-1 ovn_controller[132845]: 2025-11-23T21:08:21Z|00046|binding|INFO|932faebb-b274-4e17-94a9-9339a27c275f: Claiming fa:16:3e:22:80:b0 10.100.0.5
Nov 23 21:08:21 compute-1 nova_compute[230183]: 2025-11-23 21:08:21.506 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.519 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:80:b0 10.100.0.5'], port_security=['fa:16:3e:22:80:b0 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '451aa9f7-4cd0-413e-beed-8a30a8685ff1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0cfca448-ff51-45d5-9a96-e7d306414608', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b3669a8c-2edc-4975-aec5-618de39b846f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab9ca556-3834-43fe-9280-f86716cb1ac8, chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=932faebb-b274-4e17-94a9-9339a27c275f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.520 142158 INFO neutron.agent.ovn.metadata.agent [-] Port 932faebb-b274-4e17-94a9-9339a27c275f in datapath 0cfca448-ff51-45d5-9a96-e7d306414608 bound to our chassis
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.521 142158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0cfca448-ff51-45d5-9a96-e7d306414608
Nov 23 21:08:21 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.531 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[1aada236-2f5c-4072-a960-4ba4fc5c95bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.532 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0cfca448-f1 in ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 23 21:08:21 compute-1 systemd-machined[193469]: New machine qemu-2-instance-00000003.
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.533 233901 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0cfca448-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.533 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[f753328a-1356-4b8d-be59-abcbf8a31015]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.534 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[cf145dad-e551-4a8a-bf0e-309dd0b0e11c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.544 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[1326c1c8-a04f-4a52-aa3f-34aee9eca868]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:21 compute-1 systemd[1]: Started Virtual Machine qemu-2-instance-00000003.
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.568 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[d1e7044e-3925-4910-851a-1932692313b2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:21 compute-1 systemd-udevd[235076]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 21:08:21 compute-1 NetworkManager[49021]: <info>  [1763932101.5919] device (tap932faebb-b2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 23 21:08:21 compute-1 NetworkManager[49021]: <info>  [1763932101.5931] device (tap932faebb-b2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.601 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[1b88f7ce-2346-4291-8f7c-6ec7034382f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:21 compute-1 NetworkManager[49021]: <info>  [1763932101.6080] manager: (tap0cfca448-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/34)
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.606 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[6aaa6f28-2cdc-4d95-99b9-2aff9763fbcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:21 compute-1 ovn_controller[132845]: 2025-11-23T21:08:21Z|00047|binding|INFO|Setting lport 932faebb-b274-4e17-94a9-9339a27c275f ovn-installed in OVS
Nov 23 21:08:21 compute-1 ovn_controller[132845]: 2025-11-23T21:08:21Z|00048|binding|INFO|Setting lport 932faebb-b274-4e17-94a9-9339a27c275f up in Southbound
Nov 23 21:08:21 compute-1 nova_compute[230183]: 2025-11-23 21:08:21.614 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.638 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[772865ed-72e2-46a2-b30e-471b2ad1f263]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.641 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[c24cd3ef-c57d-476b-9e55-a67705b0b485]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:21 compute-1 NetworkManager[49021]: <info>  [1763932101.6577] device (tap0cfca448-f0): carrier: link connected
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.662 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[2a05331e-5f02-4185-831d-ff3038b5024b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.680 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[8eee19be-d932-4cbe-a96c-538dde2fa697]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0cfca448-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:a5:7d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 405023, 'reachable_time': 41170, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 235106, 'error': None, 'target': 'ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.693 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[e9b71050-b303-4443-84ad-07985cdb37ab]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb1:a57d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 405023, 'tstamp': 405023}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 235107, 'error': None, 'target': 'ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.709 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[48ed6155-cf74-4b17-a0ab-fd5196ce9d70]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0cfca448-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:a5:7d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 405023, 'reachable_time': 41170, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 235108, 'error': None, 'target': 'ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.741 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[5f08d759-07db-4404-a7c7-b6dcdd9bf68f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.808 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[c7abd8d6-d662-41b8-9636-4b78a8bcd865]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.809 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0cfca448-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.809 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.810 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0cfca448-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:08:21 compute-1 nova_compute[230183]: 2025-11-23 21:08:21.811 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:21 compute-1 kernel: tap0cfca448-f0: entered promiscuous mode
Nov 23 21:08:21 compute-1 NetworkManager[49021]: <info>  [1763932101.8131] manager: (tap0cfca448-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.816 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0cfca448-f0, col_values=(('external_ids', {'iface-id': '54600d4f-e167-4eaf-830f-ddc1c402909e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:08:21 compute-1 ovn_controller[132845]: 2025-11-23T21:08:21Z|00049|binding|INFO|Releasing lport 54600d4f-e167-4eaf-830f-ddc1c402909e from this chassis (sb_readonly=0)
Nov 23 21:08:21 compute-1 nova_compute[230183]: 2025-11-23 21:08:21.818 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.819 142158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0cfca448-ff51-45d5-9a96-e7d306414608.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0cfca448-ff51-45d5-9a96-e7d306414608.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.819 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[0bb51c47-c49a-47f8-935a-e47623bdd58b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.820 142158 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]: global
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]:     log         /dev/log local0 debug
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]:     log-tag     haproxy-metadata-proxy-0cfca448-ff51-45d5-9a96-e7d306414608
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]:     user        root
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]:     group       root
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]:     maxconn     1024
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]:     pidfile     /var/lib/neutron/external/pids/0cfca448-ff51-45d5-9a96-e7d306414608.pid.haproxy
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]:     daemon
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]: 
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]: defaults
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]:     log global
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]:     mode http
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]:     option httplog
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]:     option dontlognull
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]:     option http-server-close
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]:     option forwardfor
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]:     retries                 3
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]:     timeout http-request    30s
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]:     timeout connect         30s
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]:     timeout client          32s
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]:     timeout server          32s
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]:     timeout http-keep-alive 30s
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]: 
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]: 
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]: listen listener
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]:     bind 169.254.169.254:80
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]:     server metadata /var/lib/neutron/metadata_proxy
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]:     http-request add-header X-OVN-Network-ID 0cfca448-ff51-45d5-9a96-e7d306414608
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 23 21:08:21 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.821 142158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608', 'env', 'PROCESS_TAG=haproxy-0cfca448-ff51-45d5-9a96-e7d306414608', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0cfca448-ff51-45d5-9a96-e7d306414608.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 23 21:08:21 compute-1 nova_compute[230183]: 2025-11-23 21:08:21.830 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:21 compute-1 nova_compute[230183]: 2025-11-23 21:08:21.912 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:22 compute-1 nova_compute[230183]: 2025-11-23 21:08:22.179 230187 DEBUG nova.compute.manager [req-dfe63e0f-b9c9-4d03-8851-1d9f4773f2d5 req-cf274ec6-893c-49dc-8482-2ab7f19b2e86 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received event network-vif-plugged-932faebb-b274-4e17-94a9-9339a27c275f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:08:22 compute-1 nova_compute[230183]: 2025-11-23 21:08:22.180 230187 DEBUG oslo_concurrency.lockutils [req-dfe63e0f-b9c9-4d03-8851-1d9f4773f2d5 req-cf274ec6-893c-49dc-8482-2ab7f19b2e86 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:08:22 compute-1 nova_compute[230183]: 2025-11-23 21:08:22.180 230187 DEBUG oslo_concurrency.lockutils [req-dfe63e0f-b9c9-4d03-8851-1d9f4773f2d5 req-cf274ec6-893c-49dc-8482-2ab7f19b2e86 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:08:22 compute-1 nova_compute[230183]: 2025-11-23 21:08:22.180 230187 DEBUG oslo_concurrency.lockutils [req-dfe63e0f-b9c9-4d03-8851-1d9f4773f2d5 req-cf274ec6-893c-49dc-8482-2ab7f19b2e86 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:08:22 compute-1 nova_compute[230183]: 2025-11-23 21:08:22.180 230187 DEBUG nova.compute.manager [req-dfe63e0f-b9c9-4d03-8851-1d9f4773f2d5 req-cf274ec6-893c-49dc-8482-2ab7f19b2e86 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Processing event network-vif-plugged-932faebb-b274-4e17-94a9-9339a27c275f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 23 21:08:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:08:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:22.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:08:22 compute-1 podman[235140]: 2025-11-23 21:08:22.225017778 +0000 UTC m=+0.068466418 container create 4d6790caaaf3d0762e0973c0e27b136fb698887c845f3709538675eb279e1ffa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 23 21:08:22 compute-1 systemd[1]: Started libpod-conmon-4d6790caaaf3d0762e0973c0e27b136fb698887c845f3709538675eb279e1ffa.scope.
Nov 23 21:08:22 compute-1 systemd[1]: Started libcrun container.
Nov 23 21:08:22 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e59bb26b82ad07b4bc95bd3eabbfae128162a27036a9012db8ac3aeadc048e2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 21:08:22 compute-1 podman[235140]: 2025-11-23 21:08:22.195458119 +0000 UTC m=+0.038906799 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 23 21:08:22 compute-1 podman[235140]: 2025-11-23 21:08:22.293110807 +0000 UTC m=+0.136559437 container init 4d6790caaaf3d0762e0973c0e27b136fb698887c845f3709538675eb279e1ffa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 23 21:08:22 compute-1 podman[235140]: 2025-11-23 21:08:22.297965012 +0000 UTC m=+0.141413642 container start 4d6790caaaf3d0762e0973c0e27b136fb698887c845f3709538675eb279e1ffa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 21:08:22 compute-1 neutron-haproxy-ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608[235192]: [NOTICE]   (235200) : New worker (235202) forked
Nov 23 21:08:22 compute-1 neutron-haproxy-ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608[235192]: [NOTICE]   (235200) : Loading success.
Nov 23 21:08:22 compute-1 nova_compute[230183]: 2025-11-23 21:08:22.347 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932102.3467357, 451aa9f7-4cd0-413e-beed-8a30a8685ff1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 23 21:08:22 compute-1 nova_compute[230183]: 2025-11-23 21:08:22.347 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] VM Started (Lifecycle Event)
Nov 23 21:08:22 compute-1 nova_compute[230183]: 2025-11-23 21:08:22.349 230187 DEBUG nova.compute.manager [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 23 21:08:22 compute-1 nova_compute[230183]: 2025-11-23 21:08:22.352 230187 DEBUG nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 23 21:08:22 compute-1 nova_compute[230183]: 2025-11-23 21:08:22.355 230187 INFO nova.virt.libvirt.driver [-] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Instance spawned successfully.
Nov 23 21:08:22 compute-1 nova_compute[230183]: 2025-11-23 21:08:22.356 230187 DEBUG nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 23 21:08:22 compute-1 nova_compute[230183]: 2025-11-23 21:08:22.369 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:08:22 compute-1 nova_compute[230183]: 2025-11-23 21:08:22.375 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 23 21:08:22 compute-1 nova_compute[230183]: 2025-11-23 21:08:22.380 230187 DEBUG nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:08:22 compute-1 nova_compute[230183]: 2025-11-23 21:08:22.380 230187 DEBUG nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:08:22 compute-1 nova_compute[230183]: 2025-11-23 21:08:22.381 230187 DEBUG nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:08:22 compute-1 nova_compute[230183]: 2025-11-23 21:08:22.381 230187 DEBUG nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:08:22 compute-1 nova_compute[230183]: 2025-11-23 21:08:22.382 230187 DEBUG nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:08:22 compute-1 nova_compute[230183]: 2025-11-23 21:08:22.382 230187 DEBUG nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:08:22 compute-1 nova_compute[230183]: 2025-11-23 21:08:22.389 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 23 21:08:22 compute-1 nova_compute[230183]: 2025-11-23 21:08:22.389 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932102.346841, 451aa9f7-4cd0-413e-beed-8a30a8685ff1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 23 21:08:22 compute-1 nova_compute[230183]: 2025-11-23 21:08:22.390 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] VM Paused (Lifecycle Event)
Nov 23 21:08:22 compute-1 nova_compute[230183]: 2025-11-23 21:08:22.409 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:08:22 compute-1 nova_compute[230183]: 2025-11-23 21:08:22.413 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932102.3515668, 451aa9f7-4cd0-413e-beed-8a30a8685ff1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 23 21:08:22 compute-1 nova_compute[230183]: 2025-11-23 21:08:22.413 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] VM Resumed (Lifecycle Event)
Nov 23 21:08:22 compute-1 nova_compute[230183]: 2025-11-23 21:08:22.443 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:08:22 compute-1 nova_compute[230183]: 2025-11-23 21:08:22.445 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 23 21:08:22 compute-1 nova_compute[230183]: 2025-11-23 21:08:22.478 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 23 21:08:22 compute-1 nova_compute[230183]: 2025-11-23 21:08:22.498 230187 INFO nova.compute.manager [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Took 6.68 seconds to spawn the instance on the hypervisor.
Nov 23 21:08:22 compute-1 nova_compute[230183]: 2025-11-23 21:08:22.499 230187 DEBUG nova.compute.manager [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:08:22 compute-1 nova_compute[230183]: 2025-11-23 21:08:22.568 230187 INFO nova.compute.manager [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Took 7.59 seconds to build instance.
Nov 23 21:08:22 compute-1 nova_compute[230183]: 2025-11-23 21:08:22.587 230187 DEBUG oslo_concurrency.lockutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:08:22 compute-1 ceph-mon[80135]: pgmap v808: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 23 21:08:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:22.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:24 compute-1 sudo[235213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:08:24 compute-1 sudo[235213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:08:24 compute-1 sudo[235213]: pam_unix(sudo:session): session closed for user root
Nov 23 21:08:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:08:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:24.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:08:24 compute-1 nova_compute[230183]: 2025-11-23 21:08:24.244 230187 DEBUG nova.compute.manager [req-dd4752ca-a477-48e6-ab04-868e85bc27f2 req-b91def5d-6189-407f-b1fe-57d5cfc2c079 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received event network-vif-plugged-932faebb-b274-4e17-94a9-9339a27c275f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:08:24 compute-1 nova_compute[230183]: 2025-11-23 21:08:24.245 230187 DEBUG oslo_concurrency.lockutils [req-dd4752ca-a477-48e6-ab04-868e85bc27f2 req-b91def5d-6189-407f-b1fe-57d5cfc2c079 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:08:24 compute-1 nova_compute[230183]: 2025-11-23 21:08:24.245 230187 DEBUG oslo_concurrency.lockutils [req-dd4752ca-a477-48e6-ab04-868e85bc27f2 req-b91def5d-6189-407f-b1fe-57d5cfc2c079 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:08:24 compute-1 nova_compute[230183]: 2025-11-23 21:08:24.246 230187 DEBUG oslo_concurrency.lockutils [req-dd4752ca-a477-48e6-ab04-868e85bc27f2 req-b91def5d-6189-407f-b1fe-57d5cfc2c079 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:08:24 compute-1 nova_compute[230183]: 2025-11-23 21:08:24.246 230187 DEBUG nova.compute.manager [req-dd4752ca-a477-48e6-ab04-868e85bc27f2 req-b91def5d-6189-407f-b1fe-57d5cfc2c079 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] No waiting events found dispatching network-vif-plugged-932faebb-b274-4e17-94a9-9339a27c275f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 21:08:24 compute-1 nova_compute[230183]: 2025-11-23 21:08:24.246 230187 WARNING nova.compute.manager [req-dd4752ca-a477-48e6-ab04-868e85bc27f2 req-b91def5d-6189-407f-b1fe-57d5cfc2c079 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received unexpected event network-vif-plugged-932faebb-b274-4e17-94a9-9339a27c275f for instance with vm_state active and task_state None.
Nov 23 21:08:24 compute-1 ceph-mon[80135]: pgmap v809: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 23 21:08:24 compute-1 sudo[235238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 21:08:24 compute-1 sudo[235238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:08:24 compute-1 sudo[235238]: pam_unix(sudo:session): session closed for user root
Nov 23 21:08:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:08:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:24.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:08:24 compute-1 sudo[235263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 21:08:24 compute-1 sudo[235263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:08:25 compute-1 sudo[235263]: pam_unix(sudo:session): session closed for user root
Nov 23 21:08:25 compute-1 nova_compute[230183]: 2025-11-23 21:08:25.381 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:25 compute-1 NetworkManager[49021]: <info>  [1763932105.6349] manager: (patch-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Nov 23 21:08:25 compute-1 NetworkManager[49021]: <info>  [1763932105.6361] manager: (patch-br-int-to-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Nov 23 21:08:25 compute-1 nova_compute[230183]: 2025-11-23 21:08:25.635 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:25 compute-1 ovn_controller[132845]: 2025-11-23T21:08:25Z|00050|binding|INFO|Releasing lport 54600d4f-e167-4eaf-830f-ddc1c402909e from this chassis (sb_readonly=0)
Nov 23 21:08:25 compute-1 ovn_controller[132845]: 2025-11-23T21:08:25Z|00051|binding|INFO|Releasing lport 54600d4f-e167-4eaf-830f-ddc1c402909e from this chassis (sb_readonly=0)
Nov 23 21:08:25 compute-1 nova_compute[230183]: 2025-11-23 21:08:25.684 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:25 compute-1 nova_compute[230183]: 2025-11-23 21:08:25.688 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:26.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:26 compute-1 nova_compute[230183]: 2025-11-23 21:08:26.232 230187 DEBUG nova.compute.manager [req-150712fc-1986-49b8-a9cd-6ffa4a9b7be7 req-19bdc11d-cd0e-42de-be75-054d2493d33a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received event network-changed-932faebb-b274-4e17-94a9-9339a27c275f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:08:26 compute-1 nova_compute[230183]: 2025-11-23 21:08:26.233 230187 DEBUG nova.compute.manager [req-150712fc-1986-49b8-a9cd-6ffa4a9b7be7 req-19bdc11d-cd0e-42de-be75-054d2493d33a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Refreshing instance network info cache due to event network-changed-932faebb-b274-4e17-94a9-9339a27c275f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 23 21:08:26 compute-1 nova_compute[230183]: 2025-11-23 21:08:26.233 230187 DEBUG oslo_concurrency.lockutils [req-150712fc-1986-49b8-a9cd-6ffa4a9b7be7 req-19bdc11d-cd0e-42de-be75-054d2493d33a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:08:26 compute-1 nova_compute[230183]: 2025-11-23 21:08:26.233 230187 DEBUG oslo_concurrency.lockutils [req-150712fc-1986-49b8-a9cd-6ffa4a9b7be7 req-19bdc11d-cd0e-42de-be75-054d2493d33a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:08:26 compute-1 nova_compute[230183]: 2025-11-23 21:08:26.233 230187 DEBUG nova.network.neutron [req-150712fc-1986-49b8-a9cd-6ffa4a9b7be7 req-19bdc11d-cd0e-42de-be75-054d2493d33a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Refreshing network info cache for port 932faebb-b274-4e17-94a9-9339a27c275f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 23 21:08:26 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:08:26 compute-1 ceph-mon[80135]: pgmap v810: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 23 21:08:26 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:08:26 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 21:08:26 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:08:26 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:08:26 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 21:08:26 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 21:08:26 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:08:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:08:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:26.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:08:26 compute-1 nova_compute[230183]: 2025-11-23 21:08:26.938 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:28.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:28 compute-1 ceph-mon[80135]: pgmap v811: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 23 21:08:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:28.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:29 compute-1 nova_compute[230183]: 2025-11-23 21:08:29.542 230187 DEBUG nova.network.neutron [req-150712fc-1986-49b8-a9cd-6ffa4a9b7be7 req-19bdc11d-cd0e-42de-be75-054d2493d33a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Updated VIF entry in instance network info cache for port 932faebb-b274-4e17-94a9-9339a27c275f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 23 21:08:29 compute-1 nova_compute[230183]: 2025-11-23 21:08:29.543 230187 DEBUG nova.network.neutron [req-150712fc-1986-49b8-a9cd-6ffa4a9b7be7 req-19bdc11d-cd0e-42de-be75-054d2493d33a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Updating instance_info_cache with network_info: [{"id": "932faebb-b274-4e17-94a9-9339a27c275f", "address": "fa:16:3e:22:80:b0", "network": {"id": "0cfca448-ff51-45d5-9a96-e7d306414608", "bridge": "br-int", "label": "tempest-network-smoke--344329804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap932faebb-b2", "ovs_interfaceid": "932faebb-b274-4e17-94a9-9339a27c275f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:08:29 compute-1 nova_compute[230183]: 2025-11-23 21:08:29.565 230187 DEBUG oslo_concurrency.lockutils [req-150712fc-1986-49b8-a9cd-6ffa4a9b7be7 req-19bdc11d-cd0e-42de-be75-054d2493d33a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:08:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:30.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:30 compute-1 sudo[235323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 21:08:30 compute-1 sudo[235323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:08:30 compute-1 sudo[235323]: pam_unix(sudo:session): session closed for user root
Nov 23 21:08:30 compute-1 nova_compute[230183]: 2025-11-23 21:08:30.386 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:30 compute-1 ceph-mon[80135]: pgmap v812: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 23 21:08:30 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:08:30 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:08:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:08:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:30.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:08:31 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:08:31 compute-1 nova_compute[230183]: 2025-11-23 21:08:31.915 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:32.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:32 compute-1 ceph-mon[80135]: pgmap v813: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1004 KiB/s wr, 77 op/s
Nov 23 21:08:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:32.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:08:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:08:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:34.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:08:34 compute-1 ceph-mon[80135]: pgmap v814: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 23 21:08:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:08:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:34.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:08:35 compute-1 nova_compute[230183]: 2025-11-23 21:08:35.391 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:35 compute-1 ceph-mon[80135]: pgmap v815: 337 pgs: 337 active+clean; 109 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 111 op/s
Nov 23 21:08:36 compute-1 ovn_controller[132845]: 2025-11-23T21:08:36Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:22:80:b0 10.100.0.5
Nov 23 21:08:36 compute-1 ovn_controller[132845]: 2025-11-23T21:08:36Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:22:80:b0 10.100.0.5
Nov 23 21:08:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:36.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:36 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:08:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:36.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:36 compute-1 nova_compute[230183]: 2025-11-23 21:08:36.959 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:38.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:38 compute-1 ceph-mon[80135]: pgmap v816: 337 pgs: 337 active+clean; 109 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 2.0 MiB/s wr, 37 op/s
Nov 23 21:08:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:38.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:40.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:40 compute-1 ceph-mon[80135]: pgmap v817: 337 pgs: 337 active+clean; 113 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 167 KiB/s rd, 2.0 MiB/s wr, 49 op/s
Nov 23 21:08:40 compute-1 nova_compute[230183]: 2025-11-23 21:08:40.439 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:08:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:40.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:08:41 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:08:41 compute-1 nova_compute[230183]: 2025-11-23 21:08:41.965 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:42.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:42 compute-1 ceph-mon[80135]: pgmap v818: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 209 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Nov 23 21:08:42 compute-1 nova_compute[230183]: 2025-11-23 21:08:42.798 230187 INFO nova.compute.manager [None req-f6e381c6-246a-4963-ac73-71e7bb9aa240 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Get console output
Nov 23 21:08:42 compute-1 nova_compute[230183]: 2025-11-23 21:08:42.803 234120 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 23 21:08:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:08:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:42.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:08:44 compute-1 sudo[235355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:08:44 compute-1 sudo[235355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:08:44 compute-1 sudo[235355]: pam_unix(sudo:session): session closed for user root
Nov 23 21:08:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:08:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:44.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:08:44 compute-1 ceph-mon[80135]: pgmap v819: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 209 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Nov 23 21:08:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:44.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:45 compute-1 nova_compute[230183]: 2025-11-23 21:08:45.505 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:46.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:46 compute-1 ceph-mon[80135]: pgmap v820: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 210 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 23 21:08:46 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:08:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:08:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:46.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:08:46 compute-1 nova_compute[230183]: 2025-11-23 21:08:46.967 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:47 compute-1 podman[235383]: 2025-11-23 21:08:47.64484625 +0000 UTC m=+0.052706683 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible)
Nov 23 21:08:47 compute-1 podman[235382]: 2025-11-23 21:08:47.679679974 +0000 UTC m=+0.089824211 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 21:08:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:48.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:48 compute-1 ceph-mon[80135]: pgmap v821: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 107 KiB/s wr, 21 op/s
Nov 23 21:08:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:08:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:48.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:49 compute-1 nova_compute[230183]: 2025-11-23 21:08:49.116 230187 DEBUG oslo_concurrency.lockutils [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "interface-451aa9f7-4cd0-413e-beed-8a30a8685ff1-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:08:49 compute-1 nova_compute[230183]: 2025-11-23 21:08:49.117 230187 DEBUG oslo_concurrency.lockutils [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "interface-451aa9f7-4cd0-413e-beed-8a30a8685ff1-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:08:49 compute-1 nova_compute[230183]: 2025-11-23 21:08:49.117 230187 DEBUG nova.objects.instance [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'flavor' on Instance uuid 451aa9f7-4cd0-413e-beed-8a30a8685ff1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:08:50 compute-1 nova_compute[230183]: 2025-11-23 21:08:50.011 230187 DEBUG nova.objects.instance [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'pci_requests' on Instance uuid 451aa9f7-4cd0-413e-beed-8a30a8685ff1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:08:50 compute-1 nova_compute[230183]: 2025-11-23 21:08:50.025 230187 DEBUG nova.network.neutron [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 23 21:08:50 compute-1 nova_compute[230183]: 2025-11-23 21:08:50.211 230187 DEBUG nova.policy [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9fb5352c62684f2ba3a326a953a10dfe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '782593db60784ab8bff41fe87d72ff5f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 23 21:08:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:50.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:50 compute-1 ceph-mon[80135]: pgmap v822: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 107 KiB/s wr, 22 op/s
Nov 23 21:08:50 compute-1 nova_compute[230183]: 2025-11-23 21:08:50.515 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:50.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:51 compute-1 nova_compute[230183]: 2025-11-23 21:08:51.046 230187 DEBUG nova.network.neutron [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Successfully created port: c1f5466b-7cb0-4db1-aacf-c88bf808a51a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 23 21:08:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:51.066 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:08:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:51.066 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:08:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:51.067 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:08:51 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:08:51 compute-1 podman[235428]: 2025-11-23 21:08:51.656655395 +0000 UTC m=+0.060580360 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 23 21:08:51 compute-1 nova_compute[230183]: 2025-11-23 21:08:51.959 230187 DEBUG nova.network.neutron [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Successfully updated port: c1f5466b-7cb0-4db1-aacf-c88bf808a51a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 23 21:08:51 compute-1 nova_compute[230183]: 2025-11-23 21:08:51.971 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:51 compute-1 nova_compute[230183]: 2025-11-23 21:08:51.982 230187 DEBUG oslo_concurrency.lockutils [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:08:51 compute-1 nova_compute[230183]: 2025-11-23 21:08:51.983 230187 DEBUG oslo_concurrency.lockutils [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquired lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:08:51 compute-1 nova_compute[230183]: 2025-11-23 21:08:51.983 230187 DEBUG nova.network.neutron [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 23 21:08:52 compute-1 nova_compute[230183]: 2025-11-23 21:08:52.076 230187 DEBUG nova.compute.manager [req-85b57734-87bd-48c8-b714-7bc6086cf1e7 req-4ed76ce1-8c6c-4044-8dc1-ffe495992e34 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received event network-changed-c1f5466b-7cb0-4db1-aacf-c88bf808a51a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:08:52 compute-1 nova_compute[230183]: 2025-11-23 21:08:52.076 230187 DEBUG nova.compute.manager [req-85b57734-87bd-48c8-b714-7bc6086cf1e7 req-4ed76ce1-8c6c-4044-8dc1-ffe495992e34 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Refreshing instance network info cache due to event network-changed-c1f5466b-7cb0-4db1-aacf-c88bf808a51a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 23 21:08:52 compute-1 nova_compute[230183]: 2025-11-23 21:08:52.077 230187 DEBUG oslo_concurrency.lockutils [req-85b57734-87bd-48c8-b714-7bc6086cf1e7 req-4ed76ce1-8c6c-4044-8dc1-ffe495992e34 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:08:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:52.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:52 compute-1 ceph-mon[80135]: pgmap v823: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 104 KiB/s wr, 10 op/s
Nov 23 21:08:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:52.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.043 230187 DEBUG nova.network.neutron [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Updating instance_info_cache with network_info: [{"id": "932faebb-b274-4e17-94a9-9339a27c275f", "address": "fa:16:3e:22:80:b0", "network": {"id": "0cfca448-ff51-45d5-9a96-e7d306414608", "bridge": "br-int", "label": "tempest-network-smoke--344329804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap932faebb-b2", "ovs_interfaceid": "932faebb-b274-4e17-94a9-9339a27c275f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "address": "fa:16:3e:c6:5e:db", "network": {"id": "c71c794f-3bb9-41ea-bd53-fb4d0511d891", "bridge": "br-int", "label": "tempest-network-smoke--1634889975", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1f5466b-7c", "ovs_interfaceid": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.063 230187 DEBUG oslo_concurrency.lockutils [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Releasing lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.065 230187 DEBUG oslo_concurrency.lockutils [req-85b57734-87bd-48c8-b714-7bc6086cf1e7 req-4ed76ce1-8c6c-4044-8dc1-ffe495992e34 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.065 230187 DEBUG nova.network.neutron [req-85b57734-87bd-48c8-b714-7bc6086cf1e7 req-4ed76ce1-8c6c-4044-8dc1-ffe495992e34 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Refreshing network info cache for port c1f5466b-7cb0-4db1-aacf-c88bf808a51a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.069 230187 DEBUG nova.virt.libvirt.vif [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:08:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-163368612',display_name='tempest-TestNetworkBasicOps-server-163368612',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-163368612',id=3,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO6ZIj438fQUpVfUUeh9lapkxwknyZNU4rtkhiTUYmBPGvkJZXNdDf4srslhWKNNtoBf1C2D4cd/jBUBjs52xRw75wPIQzFCZ8VrPBNO0yEc0UePTukzbeBIVnoSLQIebA==',key_name='tempest-TestNetworkBasicOps-1883192829',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:08:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-ptm322on',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:08:22Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=451aa9f7-4cd0-413e-beed-8a30a8685ff1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "address": "fa:16:3e:c6:5e:db", "network": {"id": "c71c794f-3bb9-41ea-bd53-fb4d0511d891", "bridge": "br-int", "label": "tempest-network-smoke--1634889975", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1f5466b-7c", "ovs_interfaceid": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.070 230187 DEBUG nova.network.os_vif_util [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "address": "fa:16:3e:c6:5e:db", "network": {"id": "c71c794f-3bb9-41ea-bd53-fb4d0511d891", "bridge": "br-int", "label": "tempest-network-smoke--1634889975", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1f5466b-7c", "ovs_interfaceid": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.071 230187 DEBUG nova.network.os_vif_util [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:5e:db,bridge_name='br-int',has_traffic_filtering=True,id=c1f5466b-7cb0-4db1-aacf-c88bf808a51a,network=Network(c71c794f-3bb9-41ea-bd53-fb4d0511d891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1f5466b-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.072 230187 DEBUG os_vif [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:5e:db,bridge_name='br-int',has_traffic_filtering=True,id=c1f5466b-7cb0-4db1-aacf-c88bf808a51a,network=Network(c71c794f-3bb9-41ea-bd53-fb4d0511d891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1f5466b-7c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.073 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.073 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.074 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.079 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.079 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc1f5466b-7c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.080 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc1f5466b-7c, col_values=(('external_ids', {'iface-id': 'c1f5466b-7cb0-4db1-aacf-c88bf808a51a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c6:5e:db', 'vm-uuid': '451aa9f7-4cd0-413e-beed-8a30a8685ff1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.081 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:54 compute-1 NetworkManager[49021]: <info>  [1763932134.0821] manager: (tapc1f5466b-7c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.082 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.088 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.090 230187 INFO os_vif [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:5e:db,bridge_name='br-int',has_traffic_filtering=True,id=c1f5466b-7cb0-4db1-aacf-c88bf808a51a,network=Network(c71c794f-3bb9-41ea-bd53-fb4d0511d891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1f5466b-7c')
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.091 230187 DEBUG nova.virt.libvirt.vif [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:08:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-163368612',display_name='tempest-TestNetworkBasicOps-server-163368612',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-163368612',id=3,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO6ZIj438fQUpVfUUeh9lapkxwknyZNU4rtkhiTUYmBPGvkJZXNdDf4srslhWKNNtoBf1C2D4cd/jBUBjs52xRw75wPIQzFCZ8VrPBNO0yEc0UePTukzbeBIVnoSLQIebA==',key_name='tempest-TestNetworkBasicOps-1883192829',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:08:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-ptm322on',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:08:22Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=451aa9f7-4cd0-413e-beed-8a30a8685ff1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "address": "fa:16:3e:c6:5e:db", "network": {"id": "c71c794f-3bb9-41ea-bd53-fb4d0511d891", "bridge": "br-int", "label": "tempest-network-smoke--1634889975", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1f5466b-7c", "ovs_interfaceid": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.091 230187 DEBUG nova.network.os_vif_util [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "address": "fa:16:3e:c6:5e:db", "network": {"id": "c71c794f-3bb9-41ea-bd53-fb4d0511d891", "bridge": "br-int", "label": "tempest-network-smoke--1634889975", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1f5466b-7c", "ovs_interfaceid": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.092 230187 DEBUG nova.network.os_vif_util [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:5e:db,bridge_name='br-int',has_traffic_filtering=True,id=c1f5466b-7cb0-4db1-aacf-c88bf808a51a,network=Network(c71c794f-3bb9-41ea-bd53-fb4d0511d891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1f5466b-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.095 230187 DEBUG nova.virt.libvirt.guest [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] attach device xml: <interface type="ethernet">
Nov 23 21:08:54 compute-1 nova_compute[230183]:   <mac address="fa:16:3e:c6:5e:db"/>
Nov 23 21:08:54 compute-1 nova_compute[230183]:   <model type="virtio"/>
Nov 23 21:08:54 compute-1 nova_compute[230183]:   <driver name="vhost" rx_queue_size="512"/>
Nov 23 21:08:54 compute-1 nova_compute[230183]:   <mtu size="1442"/>
Nov 23 21:08:54 compute-1 nova_compute[230183]:   <target dev="tapc1f5466b-7c"/>
Nov 23 21:08:54 compute-1 nova_compute[230183]: </interface>
Nov 23 21:08:54 compute-1 nova_compute[230183]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 23 21:08:54 compute-1 kernel: tapc1f5466b-7c: entered promiscuous mode
Nov 23 21:08:54 compute-1 NetworkManager[49021]: <info>  [1763932134.1104] manager: (tapc1f5466b-7c): new Tun device (/org/freedesktop/NetworkManager/Devices/39)
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.111 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:54 compute-1 ovn_controller[132845]: 2025-11-23T21:08:54Z|00052|binding|INFO|Claiming lport c1f5466b-7cb0-4db1-aacf-c88bf808a51a for this chassis.
Nov 23 21:08:54 compute-1 ovn_controller[132845]: 2025-11-23T21:08:54Z|00053|binding|INFO|c1f5466b-7cb0-4db1-aacf-c88bf808a51a: Claiming fa:16:3e:c6:5e:db 10.100.0.25
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.127 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:5e:db 10.100.0.25'], port_security=['fa:16:3e:c6:5e:db 10.100.0.25'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.25/28', 'neutron:device_id': '451aa9f7-4cd0-413e-beed-8a30a8685ff1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c71c794f-3bb9-41ea-bd53-fb4d0511d891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cfd1f7f1-25d4-42fe-ac59-ece898bff9bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a70406db-79d7-4319-98a3-b89293d6f5cb, chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=c1f5466b-7cb0-4db1-aacf-c88bf808a51a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.129 142158 INFO neutron.agent.ovn.metadata.agent [-] Port c1f5466b-7cb0-4db1-aacf-c88bf808a51a in datapath c71c794f-3bb9-41ea-bd53-fb4d0511d891 bound to our chassis
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.130 142158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c71c794f-3bb9-41ea-bd53-fb4d0511d891
Nov 23 21:08:54 compute-1 systemd-udevd[235459]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.140 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[f3ef10b6-3a50-4fac-bc23-802432c274b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.141 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc71c794f-31 in ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.142 233901 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc71c794f-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.143 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[5f8a5881-9428-4b7d-bb6a-a978f70c70d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.143 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[ef2f9aa7-8518-4ba6-a647-febbf4c29161]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:54 compute-1 NetworkManager[49021]: <info>  [1763932134.1547] device (tapc1f5466b-7c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.155 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[1ea66074-dafa-4b9a-af36-392134f8a2c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.159 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:54 compute-1 NetworkManager[49021]: <info>  [1763932134.1601] device (tapc1f5466b-7c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 23 21:08:54 compute-1 ovn_controller[132845]: 2025-11-23T21:08:54Z|00054|binding|INFO|Setting lport c1f5466b-7cb0-4db1-aacf-c88bf808a51a ovn-installed in OVS
Nov 23 21:08:54 compute-1 ovn_controller[132845]: 2025-11-23T21:08:54Z|00055|binding|INFO|Setting lport c1f5466b-7cb0-4db1-aacf-c88bf808a51a up in Southbound
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.164 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.169 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[e2f5ba08-68ed-4471-acac-472a5299af3e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.195 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[afad81f5-28ec-4c7b-9ae8-d0ef637266ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:54 compute-1 NetworkManager[49021]: <info>  [1763932134.2028] manager: (tapc71c794f-30): new Veth device (/org/freedesktop/NetworkManager/Devices/40)
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.203 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[d204e0ab-e54f-41b9-8103-c0eaab5ea1fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.206 230187 DEBUG nova.virt.libvirt.driver [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.206 230187 DEBUG nova.virt.libvirt.driver [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.207 230187 DEBUG nova.virt.libvirt.driver [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No VIF found with MAC fa:16:3e:22:80:b0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.207 230187 DEBUG nova.virt.libvirt.driver [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No VIF found with MAC fa:16:3e:c6:5e:db, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.229 230187 DEBUG nova.virt.libvirt.guest [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 21:08:54 compute-1 nova_compute[230183]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 21:08:54 compute-1 nova_compute[230183]:   <nova:name>tempest-TestNetworkBasicOps-server-163368612</nova:name>
Nov 23 21:08:54 compute-1 nova_compute[230183]:   <nova:creationTime>2025-11-23 21:08:54</nova:creationTime>
Nov 23 21:08:54 compute-1 nova_compute[230183]:   <nova:flavor name="m1.nano">
Nov 23 21:08:54 compute-1 nova_compute[230183]:     <nova:memory>128</nova:memory>
Nov 23 21:08:54 compute-1 nova_compute[230183]:     <nova:disk>1</nova:disk>
Nov 23 21:08:54 compute-1 nova_compute[230183]:     <nova:swap>0</nova:swap>
Nov 23 21:08:54 compute-1 nova_compute[230183]:     <nova:ephemeral>0</nova:ephemeral>
Nov 23 21:08:54 compute-1 nova_compute[230183]:     <nova:vcpus>1</nova:vcpus>
Nov 23 21:08:54 compute-1 nova_compute[230183]:   </nova:flavor>
Nov 23 21:08:54 compute-1 nova_compute[230183]:   <nova:owner>
Nov 23 21:08:54 compute-1 nova_compute[230183]:     <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 21:08:54 compute-1 nova_compute[230183]:     <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 21:08:54 compute-1 nova_compute[230183]:   </nova:owner>
Nov 23 21:08:54 compute-1 nova_compute[230183]:   <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 21:08:54 compute-1 nova_compute[230183]:   <nova:ports>
Nov 23 21:08:54 compute-1 nova_compute[230183]:     <nova:port uuid="932faebb-b274-4e17-94a9-9339a27c275f">
Nov 23 21:08:54 compute-1 nova_compute[230183]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 23 21:08:54 compute-1 nova_compute[230183]:     </nova:port>
Nov 23 21:08:54 compute-1 nova_compute[230183]:     <nova:port uuid="c1f5466b-7cb0-4db1-aacf-c88bf808a51a">
Nov 23 21:08:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:54 compute-1 nova_compute[230183]:       <nova:ip type="fixed" address="10.100.0.25" ipVersion="4"/>
Nov 23 21:08:54 compute-1 nova_compute[230183]:     </nova:port>
Nov 23 21:08:54 compute-1 nova_compute[230183]:   </nova:ports>
Nov 23 21:08:54 compute-1 nova_compute[230183]: </nova:instance>
Nov 23 21:08:54 compute-1 nova_compute[230183]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 23 21:08:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:54.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.233 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[d089dfaa-3d1b-4eea-97fe-e79bffdbed92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.238 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[d9895fc2-9a95-4d89-bb52-003e8fdeed23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.250 230187 DEBUG oslo_concurrency.lockutils [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "interface-451aa9f7-4cd0-413e-beed-8a30a8685ff1-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.133s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:08:54 compute-1 NetworkManager[49021]: <info>  [1763932134.2596] device (tapc71c794f-30): carrier: link connected
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.264 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[c7d9da9f-a416-444c-ac87-8e7d46007bfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.280 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[fde95b44-8629-4b2f-9e6f-b60371572a19]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc71c794f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:c2:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 408283, 'reachable_time': 27767, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 235485, 'error': None, 'target': 'ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.295 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[9adee06c-b191-4721-a6cd-c163f3159624]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7d:c2a3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 408283, 'tstamp': 408283}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 235486, 'error': None, 'target': 'ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.310 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[4b270c3e-2cf7-4bf6-a9fc-5876587f9e1b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc71c794f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:c2:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 408283, 'reachable_time': 27767, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 235487, 'error': None, 'target': 'ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.339 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[8c558db2-7eae-447f-a332-93dc7d4fe1e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.394 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[fc10d943-52ce-4751-bd9b-55f6d348e48b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.395 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc71c794f-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.396 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.396 230187 DEBUG nova.compute.manager [req-c75b1380-da47-415c-a736-26a94d8ce267 req-5d88ab42-966f-4fb1-a9cf-cbce3d979ed0 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received event network-vif-plugged-c1f5466b-7cb0-4db1-aacf-c88bf808a51a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.396 230187 DEBUG oslo_concurrency.lockutils [req-c75b1380-da47-415c-a736-26a94d8ce267 req-5d88ab42-966f-4fb1-a9cf-cbce3d979ed0 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.396 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc71c794f-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.396 230187 DEBUG oslo_concurrency.lockutils [req-c75b1380-da47-415c-a736-26a94d8ce267 req-5d88ab42-966f-4fb1-a9cf-cbce3d979ed0 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.397 230187 DEBUG oslo_concurrency.lockutils [req-c75b1380-da47-415c-a736-26a94d8ce267 req-5d88ab42-966f-4fb1-a9cf-cbce3d979ed0 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.397 230187 DEBUG nova.compute.manager [req-c75b1380-da47-415c-a736-26a94d8ce267 req-5d88ab42-966f-4fb1-a9cf-cbce3d979ed0 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] No waiting events found dispatching network-vif-plugged-c1f5466b-7cb0-4db1-aacf-c88bf808a51a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.397 230187 WARNING nova.compute.manager [req-c75b1380-da47-415c-a736-26a94d8ce267 req-5d88ab42-966f-4fb1-a9cf-cbce3d979ed0 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received unexpected event network-vif-plugged-c1f5466b-7cb0-4db1-aacf-c88bf808a51a for instance with vm_state active and task_state None.
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.398 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:54 compute-1 NetworkManager[49021]: <info>  [1763932134.3990] manager: (tapc71c794f-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Nov 23 21:08:54 compute-1 kernel: tapc71c794f-30: entered promiscuous mode
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.403 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc71c794f-30, col_values=(('external_ids', {'iface-id': '5df25d22-b106-405a-b6b2-c3bf4fd41e45'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:08:54 compute-1 ovn_controller[132845]: 2025-11-23T21:08:54Z|00056|binding|INFO|Releasing lport 5df25d22-b106-405a-b6b2-c3bf4fd41e45 from this chassis (sb_readonly=0)
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.404 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.405 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.407 142158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c71c794f-3bb9-41ea-bd53-fb4d0511d891.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c71c794f-3bb9-41ea-bd53-fb4d0511d891.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.408 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[57c90d2f-db35-4b63-b0e2-91e70754a46f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.408 142158 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]: global
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]:     log         /dev/log local0 debug
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]:     log-tag     haproxy-metadata-proxy-c71c794f-3bb9-41ea-bd53-fb4d0511d891
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]:     user        root
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]:     group       root
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]:     maxconn     1024
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]:     pidfile     /var/lib/neutron/external/pids/c71c794f-3bb9-41ea-bd53-fb4d0511d891.pid.haproxy
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]:     daemon
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]: 
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]: defaults
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]:     log global
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]:     mode http
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]:     option httplog
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]:     option dontlognull
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]:     option http-server-close
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]:     option forwardfor
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]:     retries                 3
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]:     timeout http-request    30s
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]:     timeout connect         30s
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]:     timeout client          32s
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]:     timeout server          32s
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]:     timeout http-keep-alive 30s
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]: 
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]: 
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]: listen listener
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]:     bind 169.254.169.254:80
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]:     server metadata /var/lib/neutron/metadata_proxy
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]:     http-request add-header X-OVN-Network-ID c71c794f-3bb9-41ea-bd53-fb4d0511d891
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 23 21:08:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.409 142158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891', 'env', 'PROCESS_TAG=haproxy-c71c794f-3bb9-41ea-bd53-fb4d0511d891', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c71c794f-3bb9-41ea-bd53-fb4d0511d891.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 23 21:08:54 compute-1 nova_compute[230183]: 2025-11-23 21:08:54.417 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:54 compute-1 ceph-mon[80135]: pgmap v824: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 16 KiB/s wr, 1 op/s
Nov 23 21:08:54 compute-1 podman[235519]: 2025-11-23 21:08:54.781738149 +0000 UTC m=+0.058072741 container create dea21853cb7b42b70acf76448e66fa754e888f661bc0674eb93d4f2191bca507 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 21:08:54 compute-1 systemd[1]: Started libpod-conmon-dea21853cb7b42b70acf76448e66fa754e888f661bc0674eb93d4f2191bca507.scope.
Nov 23 21:08:54 compute-1 podman[235519]: 2025-11-23 21:08:54.753467875 +0000 UTC m=+0.029802507 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 23 21:08:54 compute-1 systemd[1]: Started libcrun container.
Nov 23 21:08:54 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09efd6aef4de0b9fee32aa5a46b9f13dd619c92f7667aa4d49a98b02a6bce3c0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 21:08:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:08:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:54.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:08:54 compute-1 podman[235519]: 2025-11-23 21:08:54.876024732 +0000 UTC m=+0.152359364 container init dea21853cb7b42b70acf76448e66fa754e888f661bc0674eb93d4f2191bca507 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 23 21:08:54 compute-1 podman[235519]: 2025-11-23 21:08:54.883380066 +0000 UTC m=+0.159714668 container start dea21853cb7b42b70acf76448e66fa754e888f661bc0674eb93d4f2191bca507 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 21:08:54 compute-1 neutron-haproxy-ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891[235534]: [NOTICE]   (235538) : New worker (235540) forked
Nov 23 21:08:54 compute-1 neutron-haproxy-ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891[235534]: [NOTICE]   (235538) : Loading success.
Nov 23 21:08:55 compute-1 sshd-session[235449]: Connection closed by authenticating user root 80.94.95.116 port 56592 [preauth]
Nov 23 21:08:55 compute-1 nova_compute[230183]: 2025-11-23 21:08:55.246 230187 DEBUG nova.network.neutron [req-85b57734-87bd-48c8-b714-7bc6086cf1e7 req-4ed76ce1-8c6c-4044-8dc1-ffe495992e34 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Updated VIF entry in instance network info cache for port c1f5466b-7cb0-4db1-aacf-c88bf808a51a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 23 21:08:55 compute-1 nova_compute[230183]: 2025-11-23 21:08:55.247 230187 DEBUG nova.network.neutron [req-85b57734-87bd-48c8-b714-7bc6086cf1e7 req-4ed76ce1-8c6c-4044-8dc1-ffe495992e34 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Updating instance_info_cache with network_info: [{"id": "932faebb-b274-4e17-94a9-9339a27c275f", "address": "fa:16:3e:22:80:b0", "network": {"id": "0cfca448-ff51-45d5-9a96-e7d306414608", "bridge": "br-int", "label": "tempest-network-smoke--344329804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap932faebb-b2", "ovs_interfaceid": "932faebb-b274-4e17-94a9-9339a27c275f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "address": "fa:16:3e:c6:5e:db", "network": {"id": "c71c794f-3bb9-41ea-bd53-fb4d0511d891", "bridge": "br-int", "label": "tempest-network-smoke--1634889975", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1f5466b-7c", "ovs_interfaceid": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:08:55 compute-1 nova_compute[230183]: 2025-11-23 21:08:55.260 230187 DEBUG oslo_concurrency.lockutils [req-85b57734-87bd-48c8-b714-7bc6086cf1e7 req-4ed76ce1-8c6c-4044-8dc1-ffe495992e34 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:08:55 compute-1 nova_compute[230183]: 2025-11-23 21:08:55.629 230187 DEBUG oslo_concurrency.lockutils [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "interface-451aa9f7-4cd0-413e-beed-8a30a8685ff1-c1f5466b-7cb0-4db1-aacf-c88bf808a51a" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:08:55 compute-1 nova_compute[230183]: 2025-11-23 21:08:55.629 230187 DEBUG oslo_concurrency.lockutils [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "interface-451aa9f7-4cd0-413e-beed-8a30a8685ff1-c1f5466b-7cb0-4db1-aacf-c88bf808a51a" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:08:55 compute-1 nova_compute[230183]: 2025-11-23 21:08:55.649 230187 DEBUG nova.objects.instance [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'flavor' on Instance uuid 451aa9f7-4cd0-413e-beed-8a30a8685ff1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:08:55 compute-1 nova_compute[230183]: 2025-11-23 21:08:55.664 230187 DEBUG nova.virt.libvirt.vif [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:08:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-163368612',display_name='tempest-TestNetworkBasicOps-server-163368612',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-163368612',id=3,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO6ZIj438fQUpVfUUeh9lapkxwknyZNU4rtkhiTUYmBPGvkJZXNdDf4srslhWKNNtoBf1C2D4cd/jBUBjs52xRw75wPIQzFCZ8VrPBNO0yEc0UePTukzbeBIVnoSLQIebA==',key_name='tempest-TestNetworkBasicOps-1883192829',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:08:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-ptm322on',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:08:22Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=451aa9f7-4cd0-413e-beed-8a30a8685ff1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "address": "fa:16:3e:c6:5e:db", "network": {"id": "c71c794f-3bb9-41ea-bd53-fb4d0511d891", "bridge": "br-int", "label": "tempest-network-smoke--1634889975", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1f5466b-7c", "ovs_interfaceid": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 23 21:08:55 compute-1 nova_compute[230183]: 2025-11-23 21:08:55.664 230187 DEBUG nova.network.os_vif_util [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "address": "fa:16:3e:c6:5e:db", "network": {"id": "c71c794f-3bb9-41ea-bd53-fb4d0511d891", "bridge": "br-int", "label": "tempest-network-smoke--1634889975", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1f5466b-7c", "ovs_interfaceid": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 23 21:08:55 compute-1 nova_compute[230183]: 2025-11-23 21:08:55.665 230187 DEBUG nova.network.os_vif_util [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:5e:db,bridge_name='br-int',has_traffic_filtering=True,id=c1f5466b-7cb0-4db1-aacf-c88bf808a51a,network=Network(c71c794f-3bb9-41ea-bd53-fb4d0511d891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1f5466b-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 23 21:08:55 compute-1 nova_compute[230183]: 2025-11-23 21:08:55.669 230187 DEBUG nova.virt.libvirt.guest [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c6:5e:db"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc1f5466b-7c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 23 21:08:55 compute-1 ovn_controller[132845]: 2025-11-23T21:08:55Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c6:5e:db 10.100.0.25
Nov 23 21:08:55 compute-1 ovn_controller[132845]: 2025-11-23T21:08:55Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c6:5e:db 10.100.0.25
Nov 23 21:08:55 compute-1 nova_compute[230183]: 2025-11-23 21:08:55.672 230187 DEBUG nova.virt.libvirt.guest [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c6:5e:db"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc1f5466b-7c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 23 21:08:55 compute-1 nova_compute[230183]: 2025-11-23 21:08:55.674 230187 DEBUG nova.virt.libvirt.driver [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Attempting to detach device tapc1f5466b-7c from instance 451aa9f7-4cd0-413e-beed-8a30a8685ff1 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 23 21:08:55 compute-1 nova_compute[230183]: 2025-11-23 21:08:55.675 230187 DEBUG nova.virt.libvirt.guest [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] detach device xml: <interface type="ethernet">
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <mac address="fa:16:3e:c6:5e:db"/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <model type="virtio"/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <driver name="vhost" rx_queue_size="512"/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <mtu size="1442"/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <target dev="tapc1f5466b-7c"/>
Nov 23 21:08:55 compute-1 nova_compute[230183]: </interface>
Nov 23 21:08:55 compute-1 nova_compute[230183]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 23 21:08:55 compute-1 nova_compute[230183]: 2025-11-23 21:08:55.679 230187 DEBUG nova.virt.libvirt.guest [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c6:5e:db"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc1f5466b-7c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 23 21:08:55 compute-1 nova_compute[230183]: 2025-11-23 21:08:55.683 230187 DEBUG nova.virt.libvirt.guest [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:c6:5e:db"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc1f5466b-7c"/></interface>not found in domain: <domain type='kvm' id='2'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <name>instance-00000003</name>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <uuid>451aa9f7-4cd0-413e-beed-8a30a8685ff1</uuid>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <metadata>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <nova:name>tempest-TestNetworkBasicOps-server-163368612</nova:name>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <nova:creationTime>2025-11-23 21:08:54</nova:creationTime>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <nova:flavor name="m1.nano">
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <nova:memory>128</nova:memory>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <nova:disk>1</nova:disk>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <nova:swap>0</nova:swap>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <nova:ephemeral>0</nova:ephemeral>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <nova:vcpus>1</nova:vcpus>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   </nova:flavor>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <nova:owner>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   </nova:owner>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <nova:ports>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <nova:port uuid="932faebb-b274-4e17-94a9-9339a27c275f">
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </nova:port>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <nova:port uuid="c1f5466b-7cb0-4db1-aacf-c88bf808a51a">
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <nova:ip type="fixed" address="10.100.0.25" ipVersion="4"/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </nova:port>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   </nova:ports>
Nov 23 21:08:55 compute-1 nova_compute[230183]: </nova:instance>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   </metadata>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <memory unit='KiB'>131072</memory>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <vcpu placement='static'>1</vcpu>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <resource>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <partition>/machine</partition>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   </resource>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <sysinfo type='smbios'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <system>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <entry name='manufacturer'>RDO</entry>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <entry name='product'>OpenStack Compute</entry>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <entry name='serial'>451aa9f7-4cd0-413e-beed-8a30a8685ff1</entry>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <entry name='uuid'>451aa9f7-4cd0-413e-beed-8a30a8685ff1</entry>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <entry name='family'>Virtual Machine</entry>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </system>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   </sysinfo>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <os>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <boot dev='hd'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <smbios mode='sysinfo'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   </os>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <features>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <acpi/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <apic/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <vmcoreinfo state='on'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   </features>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <cpu mode='custom' match='exact' check='full'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <vendor>AMD</vendor>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='require' name='x2apic'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='require' name='tsc-deadline'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='require' name='hypervisor'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='require' name='tsc_adjust'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='require' name='spec-ctrl'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='require' name='stibp'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='require' name='ssbd'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='require' name='cmp_legacy'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='require' name='overflow-recov'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='require' name='succor'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='require' name='ibrs'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='require' name='amd-ssbd'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='require' name='virt-ssbd'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='disable' name='lbrv'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='disable' name='tsc-scale'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='disable' name='vmcb-clean'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='disable' name='flushbyasid'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='disable' name='pause-filter'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='disable' name='pfthreshold'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='disable' name='xsaves'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='disable' name='svm'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='require' name='topoext'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='disable' name='npt'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='disable' name='nrip-save'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   </cpu>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <clock offset='utc'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <timer name='pit' tickpolicy='delay'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <timer name='hpet' present='no'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   </clock>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <on_poweroff>destroy</on_poweroff>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <on_reboot>restart</on_reboot>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <on_crash>destroy</on_crash>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <devices>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <disk type='network' device='disk'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <driver name='qemu' type='raw' cache='none'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <auth username='openstack'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:         <secret type='ceph' uuid='03808be8-ae4a-5548-82e6-4a294f1bc627'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       </auth>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <source protocol='rbd' name='vms/451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk' index='2'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:         <host name='192.168.122.100' port='6789'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:         <host name='192.168.122.102' port='6789'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:         <host name='192.168.122.101' port='6789'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       </source>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target dev='vda' bus='virtio'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='virtio-disk0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </disk>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <disk type='network' device='cdrom'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <driver name='qemu' type='raw' cache='none'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <auth username='openstack'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:         <secret type='ceph' uuid='03808be8-ae4a-5548-82e6-4a294f1bc627'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       </auth>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <source protocol='rbd' name='vms/451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk.config' index='1'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:         <host name='192.168.122.100' port='6789'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:         <host name='192.168.122.102' port='6789'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:         <host name='192.168.122.101' port='6789'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       </source>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target dev='sda' bus='sata'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <readonly/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='sata0-0-0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </disk>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='0' model='pcie-root'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pcie.0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='1' port='0x10'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.1'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='2' port='0x11'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.2'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='3' port='0x12'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.3'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='4' port='0x13'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.4'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='5' port='0x14'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.5'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='6' port='0x15'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.6'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='7' port='0x16'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.7'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='8' port='0x17'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.8'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='9' port='0x18'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.9'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='10' port='0x19'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.10'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='11' port='0x1a'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.11'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='12' port='0x1b'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.12'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='13' port='0x1c'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.13'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='14' port='0x1d'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.14'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='15' port='0x1e'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.15'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='16' port='0x1f'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.16'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='17' port='0x20'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.17'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='18' port='0x21'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.18'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='19' port='0x22'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.19'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='20' port='0x23'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.20'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='21' port='0x24'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.21'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='22' port='0x25'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.22'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='23' port='0x26'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.23'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='24' port='0x27'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.24'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='25' port='0x28'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.25'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-pci-bridge'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.26'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='usb'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='sata' index='0'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='ide'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <interface type='ethernet'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <mac address='fa:16:3e:22:80:b0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target dev='tap932faebb-b2'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model type='virtio'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <driver name='vhost' rx_queue_size='512'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <mtu size='1442'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='net0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </interface>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <interface type='ethernet'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <mac address='fa:16:3e:c6:5e:db'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target dev='tapc1f5466b-7c'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model type='virtio'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <driver name='vhost' rx_queue_size='512'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <mtu size='1442'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='net1'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </interface>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <serial type='pty'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <source path='/dev/pts/0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <log file='/var/lib/nova/instances/451aa9f7-4cd0-413e-beed-8a30a8685ff1/console.log' append='off'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target type='isa-serial' port='0'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:         <model name='isa-serial'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       </target>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='serial0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </serial>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <console type='pty' tty='/dev/pts/0'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <source path='/dev/pts/0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <log file='/var/lib/nova/instances/451aa9f7-4cd0-413e-beed-8a30a8685ff1/console.log' append='off'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target type='serial' port='0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='serial0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </console>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <input type='tablet' bus='usb'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='input0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='usb' bus='0' port='1'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </input>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <input type='mouse' bus='ps2'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='input1'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </input>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <input type='keyboard' bus='ps2'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='input2'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </input>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <listen type='address' address='::0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </graphics>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <audio id='1' type='none'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <video>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model type='virtio' heads='1' primary='yes'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='video0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </video>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <watchdog model='itco' action='reset'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='watchdog0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </watchdog>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <memballoon model='virtio'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <stats period='10'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='balloon0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </memballoon>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <rng model='virtio'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <backend model='random'>/dev/urandom</backend>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='rng0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </rng>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   </devices>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <label>system_u:system_r:svirt_t:s0:c591,c609</label>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c591,c609</imagelabel>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   </seclabel>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <label>+107:+107</label>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <imagelabel>+107:+107</imagelabel>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   </seclabel>
Nov 23 21:08:55 compute-1 nova_compute[230183]: </domain>
Nov 23 21:08:55 compute-1 nova_compute[230183]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 23 21:08:55 compute-1 nova_compute[230183]: 2025-11-23 21:08:55.683 230187 INFO nova.virt.libvirt.driver [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully detached device tapc1f5466b-7c from instance 451aa9f7-4cd0-413e-beed-8a30a8685ff1 from the persistent domain config.
Nov 23 21:08:55 compute-1 nova_compute[230183]: 2025-11-23 21:08:55.683 230187 DEBUG nova.virt.libvirt.driver [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] (1/8): Attempting to detach device tapc1f5466b-7c with device alias net1 from instance 451aa9f7-4cd0-413e-beed-8a30a8685ff1 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 23 21:08:55 compute-1 nova_compute[230183]: 2025-11-23 21:08:55.684 230187 DEBUG nova.virt.libvirt.guest [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] detach device xml: <interface type="ethernet">
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <mac address="fa:16:3e:c6:5e:db"/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <model type="virtio"/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <driver name="vhost" rx_queue_size="512"/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <mtu size="1442"/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <target dev="tapc1f5466b-7c"/>
Nov 23 21:08:55 compute-1 nova_compute[230183]: </interface>
Nov 23 21:08:55 compute-1 nova_compute[230183]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 23 21:08:55 compute-1 kernel: tapc1f5466b-7c (unregistering): left promiscuous mode
Nov 23 21:08:55 compute-1 NetworkManager[49021]: <info>  [1763932135.7803] device (tapc1f5466b-7c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 23 21:08:55 compute-1 ovn_controller[132845]: 2025-11-23T21:08:55Z|00057|binding|INFO|Releasing lport c1f5466b-7cb0-4db1-aacf-c88bf808a51a from this chassis (sb_readonly=0)
Nov 23 21:08:55 compute-1 nova_compute[230183]: 2025-11-23 21:08:55.786 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:55 compute-1 ovn_controller[132845]: 2025-11-23T21:08:55Z|00058|binding|INFO|Setting lport c1f5466b-7cb0-4db1-aacf-c88bf808a51a down in Southbound
Nov 23 21:08:55 compute-1 ovn_controller[132845]: 2025-11-23T21:08:55Z|00059|binding|INFO|Removing iface tapc1f5466b-7c ovn-installed in OVS
Nov 23 21:08:55 compute-1 nova_compute[230183]: 2025-11-23 21:08:55.788 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:55 compute-1 nova_compute[230183]: 2025-11-23 21:08:55.790 230187 DEBUG nova.virt.libvirt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Received event <DeviceRemovedEvent: 1763932135.7897394, 451aa9f7-4cd0-413e-beed-8a30a8685ff1 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 23 21:08:55 compute-1 nova_compute[230183]: 2025-11-23 21:08:55.792 230187 DEBUG nova.virt.libvirt.driver [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Start waiting for the detach event from libvirt for device tapc1f5466b-7c with device alias net1 for instance 451aa9f7-4cd0-413e-beed-8a30a8685ff1 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 23 21:08:55 compute-1 nova_compute[230183]: 2025-11-23 21:08:55.792 230187 DEBUG nova.virt.libvirt.guest [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c6:5e:db"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc1f5466b-7c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 23 21:08:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:55.795 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:5e:db 10.100.0.25'], port_security=['fa:16:3e:c6:5e:db 10.100.0.25'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.25/28', 'neutron:device_id': '451aa9f7-4cd0-413e-beed-8a30a8685ff1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c71c794f-3bb9-41ea-bd53-fb4d0511d891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cfd1f7f1-25d4-42fe-ac59-ece898bff9bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a70406db-79d7-4319-98a3-b89293d6f5cb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=c1f5466b-7cb0-4db1-aacf-c88bf808a51a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 21:08:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:55.796 142158 INFO neutron.agent.ovn.metadata.agent [-] Port c1f5466b-7cb0-4db1-aacf-c88bf808a51a in datapath c71c794f-3bb9-41ea-bd53-fb4d0511d891 unbound from our chassis
Nov 23 21:08:55 compute-1 nova_compute[230183]: 2025-11-23 21:08:55.796 230187 DEBUG nova.virt.libvirt.guest [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:c6:5e:db"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc1f5466b-7c"/></interface>not found in domain: <domain type='kvm' id='2'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <name>instance-00000003</name>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <uuid>451aa9f7-4cd0-413e-beed-8a30a8685ff1</uuid>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <metadata>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <nova:name>tempest-TestNetworkBasicOps-server-163368612</nova:name>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <nova:creationTime>2025-11-23 21:08:54</nova:creationTime>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <nova:flavor name="m1.nano">
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <nova:memory>128</nova:memory>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <nova:disk>1</nova:disk>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <nova:swap>0</nova:swap>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <nova:ephemeral>0</nova:ephemeral>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <nova:vcpus>1</nova:vcpus>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   </nova:flavor>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <nova:owner>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   </nova:owner>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <nova:ports>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <nova:port uuid="932faebb-b274-4e17-94a9-9339a27c275f">
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </nova:port>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <nova:port uuid="c1f5466b-7cb0-4db1-aacf-c88bf808a51a">
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <nova:ip type="fixed" address="10.100.0.25" ipVersion="4"/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </nova:port>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   </nova:ports>
Nov 23 21:08:55 compute-1 nova_compute[230183]: </nova:instance>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   </metadata>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <memory unit='KiB'>131072</memory>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <vcpu placement='static'>1</vcpu>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <resource>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <partition>/machine</partition>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   </resource>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <sysinfo type='smbios'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <system>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <entry name='manufacturer'>RDO</entry>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <entry name='product'>OpenStack Compute</entry>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <entry name='serial'>451aa9f7-4cd0-413e-beed-8a30a8685ff1</entry>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <entry name='uuid'>451aa9f7-4cd0-413e-beed-8a30a8685ff1</entry>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <entry name='family'>Virtual Machine</entry>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </system>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   </sysinfo>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <os>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <boot dev='hd'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <smbios mode='sysinfo'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   </os>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <features>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <acpi/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <apic/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <vmcoreinfo state='on'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   </features>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <cpu mode='custom' match='exact' check='full'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <vendor>AMD</vendor>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='require' name='x2apic'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='require' name='tsc-deadline'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='require' name='hypervisor'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='require' name='tsc_adjust'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='require' name='spec-ctrl'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='require' name='stibp'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='require' name='ssbd'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='require' name='cmp_legacy'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='require' name='overflow-recov'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='require' name='succor'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='require' name='ibrs'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='require' name='amd-ssbd'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='require' name='virt-ssbd'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='disable' name='lbrv'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='disable' name='tsc-scale'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='disable' name='vmcb-clean'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='disable' name='flushbyasid'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='disable' name='pause-filter'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='disable' name='pfthreshold'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='disable' name='xsaves'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='disable' name='svm'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='require' name='topoext'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='disable' name='npt'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <feature policy='disable' name='nrip-save'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   </cpu>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <clock offset='utc'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <timer name='pit' tickpolicy='delay'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <timer name='hpet' present='no'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   </clock>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <on_poweroff>destroy</on_poweroff>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <on_reboot>restart</on_reboot>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <on_crash>destroy</on_crash>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <devices>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <disk type='network' device='disk'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <driver name='qemu' type='raw' cache='none'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <auth username='openstack'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:         <secret type='ceph' uuid='03808be8-ae4a-5548-82e6-4a294f1bc627'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       </auth>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <source protocol='rbd' name='vms/451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk' index='2'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:         <host name='192.168.122.100' port='6789'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:         <host name='192.168.122.102' port='6789'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:         <host name='192.168.122.101' port='6789'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       </source>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target dev='vda' bus='virtio'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='virtio-disk0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </disk>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <disk type='network' device='cdrom'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <driver name='qemu' type='raw' cache='none'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <auth username='openstack'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:         <secret type='ceph' uuid='03808be8-ae4a-5548-82e6-4a294f1bc627'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       </auth>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <source protocol='rbd' name='vms/451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk.config' index='1'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:         <host name='192.168.122.100' port='6789'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:         <host name='192.168.122.102' port='6789'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:         <host name='192.168.122.101' port='6789'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       </source>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target dev='sda' bus='sata'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <readonly/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='sata0-0-0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </disk>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='0' model='pcie-root'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pcie.0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='1' port='0x10'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.1'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='2' port='0x11'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.2'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='3' port='0x12'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.3'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='4' port='0x13'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.4'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='5' port='0x14'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.5'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='6' port='0x15'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.6'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='7' port='0x16'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.7'/>
Nov 23 21:08:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:55.797 142158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c71c794f-3bb9-41ea-bd53-fb4d0511d891, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='8' port='0x17'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.8'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='9' port='0x18'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.9'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='10' port='0x19'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.10'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='11' port='0x1a'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.11'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='12' port='0x1b'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.12'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='13' port='0x1c'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.13'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='14' port='0x1d'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.14'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='15' port='0x1e'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.15'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='16' port='0x1f'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.16'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='17' port='0x20'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.17'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='18' port='0x21'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.18'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='19' port='0x22'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.19'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='20' port='0x23'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.20'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='21' port='0x24'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.21'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='22' port='0x25'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.22'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='23' port='0x26'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.23'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='24' port='0x27'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.24'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target chassis='25' port='0x28'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.25'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model name='pcie-pci-bridge'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='pci.26'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='usb'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <controller type='sata' index='0'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='ide'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <interface type='ethernet'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <mac address='fa:16:3e:22:80:b0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target dev='tap932faebb-b2'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model type='virtio'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <driver name='vhost' rx_queue_size='512'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <mtu size='1442'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='net0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </interface>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <serial type='pty'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <source path='/dev/pts/0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <log file='/var/lib/nova/instances/451aa9f7-4cd0-413e-beed-8a30a8685ff1/console.log' append='off'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target type='isa-serial' port='0'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:         <model name='isa-serial'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       </target>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='serial0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </serial>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <console type='pty' tty='/dev/pts/0'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <source path='/dev/pts/0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <log file='/var/lib/nova/instances/451aa9f7-4cd0-413e-beed-8a30a8685ff1/console.log' append='off'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <target type='serial' port='0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='serial0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </console>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <input type='tablet' bus='usb'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='input0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='usb' bus='0' port='1'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </input>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <input type='mouse' bus='ps2'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='input1'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </input>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <input type='keyboard' bus='ps2'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='input2'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </input>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <listen type='address' address='::0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </graphics>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <audio id='1' type='none'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <video>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <model type='virtio' heads='1' primary='yes'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='video0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </video>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <watchdog model='itco' action='reset'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='watchdog0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </watchdog>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <memballoon model='virtio'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <stats period='10'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='balloon0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </memballoon>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <rng model='virtio'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <backend model='random'>/dev/urandom</backend>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <alias name='rng0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </rng>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   </devices>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <label>system_u:system_r:svirt_t:s0:c591,c609</label>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c591,c609</imagelabel>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   </seclabel>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <label>+107:+107</label>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <imagelabel>+107:+107</imagelabel>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   </seclabel>
Nov 23 21:08:55 compute-1 nova_compute[230183]: </domain>
Nov 23 21:08:55 compute-1 nova_compute[230183]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 23 21:08:55 compute-1 nova_compute[230183]: 2025-11-23 21:08:55.797 230187 INFO nova.virt.libvirt.driver [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully detached device tapc1f5466b-7c from instance 451aa9f7-4cd0-413e-beed-8a30a8685ff1 from the live domain config.
Nov 23 21:08:55 compute-1 nova_compute[230183]: 2025-11-23 21:08:55.798 230187 DEBUG nova.virt.libvirt.vif [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:08:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-163368612',display_name='tempest-TestNetworkBasicOps-server-163368612',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-163368612',id=3,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO6ZIj438fQUpVfUUeh9lapkxwknyZNU4rtkhiTUYmBPGvkJZXNdDf4srslhWKNNtoBf1C2D4cd/jBUBjs52xRw75wPIQzFCZ8VrPBNO0yEc0UePTukzbeBIVnoSLQIebA==',key_name='tempest-TestNetworkBasicOps-1883192829',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:08:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-ptm322on',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:08:22Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=451aa9f7-4cd0-413e-beed-8a30a8685ff1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "address": "fa:16:3e:c6:5e:db", "network": {"id": "c71c794f-3bb9-41ea-bd53-fb4d0511d891", "bridge": "br-int", "label": "tempest-network-smoke--1634889975", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1f5466b-7c", "ovs_interfaceid": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 23 21:08:55 compute-1 nova_compute[230183]: 2025-11-23 21:08:55.798 230187 DEBUG nova.network.os_vif_util [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "address": "fa:16:3e:c6:5e:db", "network": {"id": "c71c794f-3bb9-41ea-bd53-fb4d0511d891", "bridge": "br-int", "label": "tempest-network-smoke--1634889975", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1f5466b-7c", "ovs_interfaceid": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 23 21:08:55 compute-1 nova_compute[230183]: 2025-11-23 21:08:55.799 230187 DEBUG nova.network.os_vif_util [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:5e:db,bridge_name='br-int',has_traffic_filtering=True,id=c1f5466b-7cb0-4db1-aacf-c88bf808a51a,network=Network(c71c794f-3bb9-41ea-bd53-fb4d0511d891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1f5466b-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 23 21:08:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:55.799 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[e9454103-e961-4851-adc4-bef1e4809888]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:55.800 142158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891 namespace which is not needed anymore
Nov 23 21:08:55 compute-1 nova_compute[230183]: 2025-11-23 21:08:55.800 230187 DEBUG os_vif [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:5e:db,bridge_name='br-int',has_traffic_filtering=True,id=c1f5466b-7cb0-4db1-aacf-c88bf808a51a,network=Network(c71c794f-3bb9-41ea-bd53-fb4d0511d891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1f5466b-7c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 23 21:08:55 compute-1 nova_compute[230183]: 2025-11-23 21:08:55.802 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:55 compute-1 nova_compute[230183]: 2025-11-23 21:08:55.802 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1f5466b-7c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:08:55 compute-1 nova_compute[230183]: 2025-11-23 21:08:55.804 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:55 compute-1 nova_compute[230183]: 2025-11-23 21:08:55.806 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:55 compute-1 nova_compute[230183]: 2025-11-23 21:08:55.809 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:55 compute-1 nova_compute[230183]: 2025-11-23 21:08:55.811 230187 INFO os_vif [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:5e:db,bridge_name='br-int',has_traffic_filtering=True,id=c1f5466b-7cb0-4db1-aacf-c88bf808a51a,network=Network(c71c794f-3bb9-41ea-bd53-fb4d0511d891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1f5466b-7c')
Nov 23 21:08:55 compute-1 nova_compute[230183]: 2025-11-23 21:08:55.812 230187 DEBUG nova.virt.libvirt.guest [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <nova:name>tempest-TestNetworkBasicOps-server-163368612</nova:name>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <nova:creationTime>2025-11-23 21:08:55</nova:creationTime>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <nova:flavor name="m1.nano">
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <nova:memory>128</nova:memory>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <nova:disk>1</nova:disk>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <nova:swap>0</nova:swap>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <nova:ephemeral>0</nova:ephemeral>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <nova:vcpus>1</nova:vcpus>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   </nova:flavor>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <nova:owner>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   </nova:owner>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   <nova:ports>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     <nova:port uuid="932faebb-b274-4e17-94a9-9339a27c275f">
Nov 23 21:08:55 compute-1 nova_compute[230183]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 23 21:08:55 compute-1 nova_compute[230183]:     </nova:port>
Nov 23 21:08:55 compute-1 nova_compute[230183]:   </nova:ports>
Nov 23 21:08:55 compute-1 nova_compute[230183]: </nova:instance>
Nov 23 21:08:55 compute-1 nova_compute[230183]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 23 21:08:55 compute-1 neutron-haproxy-ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891[235534]: [NOTICE]   (235538) : haproxy version is 2.8.14-c23fe91
Nov 23 21:08:55 compute-1 neutron-haproxy-ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891[235534]: [NOTICE]   (235538) : path to executable is /usr/sbin/haproxy
Nov 23 21:08:55 compute-1 neutron-haproxy-ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891[235534]: [WARNING]  (235538) : Exiting Master process...
Nov 23 21:08:55 compute-1 neutron-haproxy-ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891[235534]: [WARNING]  (235538) : Exiting Master process...
Nov 23 21:08:55 compute-1 neutron-haproxy-ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891[235534]: [ALERT]    (235538) : Current worker (235540) exited with code 143 (Terminated)
Nov 23 21:08:55 compute-1 neutron-haproxy-ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891[235534]: [WARNING]  (235538) : All workers exited. Exiting... (0)
Nov 23 21:08:55 compute-1 systemd[1]: libpod-dea21853cb7b42b70acf76448e66fa754e888f661bc0674eb93d4f2191bca507.scope: Deactivated successfully.
Nov 23 21:08:55 compute-1 podman[235572]: 2025-11-23 21:08:55.962541183 +0000 UTC m=+0.050839230 container died dea21853cb7b42b70acf76448e66fa754e888f661bc0674eb93d4f2191bca507 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 23 21:08:55 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dea21853cb7b42b70acf76448e66fa754e888f661bc0674eb93d4f2191bca507-userdata-shm.mount: Deactivated successfully.
Nov 23 21:08:55 compute-1 systemd[1]: var-lib-containers-storage-overlay-09efd6aef4de0b9fee32aa5a46b9f13dd619c92f7667aa4d49a98b02a6bce3c0-merged.mount: Deactivated successfully.
Nov 23 21:08:56 compute-1 podman[235572]: 2025-11-23 21:08:56.001243136 +0000 UTC m=+0.089541183 container cleanup dea21853cb7b42b70acf76448e66fa754e888f661bc0674eb93d4f2191bca507 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 23 21:08:56 compute-1 systemd[1]: libpod-conmon-dea21853cb7b42b70acf76448e66fa754e888f661bc0674eb93d4f2191bca507.scope: Deactivated successfully.
Nov 23 21:08:56 compute-1 podman[235601]: 2025-11-23 21:08:56.062928496 +0000 UTC m=+0.040223156 container remove dea21853cb7b42b70acf76448e66fa754e888f661bc0674eb93d4f2191bca507 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 21:08:56 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:56.068 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[f805d049-cf88-4036-9c04-b334cf0b713a]: (4, ('Sun Nov 23 09:08:55 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891 (dea21853cb7b42b70acf76448e66fa754e888f661bc0674eb93d4f2191bca507)\ndea21853cb7b42b70acf76448e66fa754e888f661bc0674eb93d4f2191bca507\nSun Nov 23 09:08:56 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891 (dea21853cb7b42b70acf76448e66fa754e888f661bc0674eb93d4f2191bca507)\ndea21853cb7b42b70acf76448e66fa754e888f661bc0674eb93d4f2191bca507\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:56 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:56.069 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[a31dba2d-7561-433b-a307-d9ce2334e551]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:56 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:56.070 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc71c794f-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.073 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:56 compute-1 kernel: tapc71c794f-30: left promiscuous mode
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.086 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:56 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:56.088 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[9b6c1756-377c-4b54-9dc8-56f5a1380d93]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:56 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:56.102 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[4a6a3648-ce81-4b46-acfa-7121191e366b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:56 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:56.103 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[fb0ef576-0fc5-42ca-b894-416fbfb72bdb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:56 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:56.116 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[e13bf8a9-2af1-456f-b86e-6675d594aa54]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 408276, 'reachable_time': 33556, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 235617, 'error': None, 'target': 'ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:56 compute-1 systemd[1]: run-netns-ovnmeta\x2dc71c794f\x2d3bb9\x2d41ea\x2dbd53\x2dfb4d0511d891.mount: Deactivated successfully.
Nov 23 21:08:56 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:56.119 142272 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 23 21:08:56 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:56.119 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[3e28f93e-1097-4733-ad86-c7d37cfc9a09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:08:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:56.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.363 230187 DEBUG oslo_concurrency.lockutils [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.364 230187 DEBUG oslo_concurrency.lockutils [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquired lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.364 230187 DEBUG nova.network.neutron [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.428 230187 DEBUG nova.compute.manager [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received event network-vif-deleted-c1f5466b-7cb0-4db1-aacf-c88bf808a51a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.429 230187 INFO nova.compute.manager [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Neutron deleted interface c1f5466b-7cb0-4db1-aacf-c88bf808a51a; detaching it from the instance and deleting it from the info cache
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.429 230187 DEBUG nova.network.neutron [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Updating instance_info_cache with network_info: [{"id": "932faebb-b274-4e17-94a9-9339a27c275f", "address": "fa:16:3e:22:80:b0", "network": {"id": "0cfca448-ff51-45d5-9a96-e7d306414608", "bridge": "br-int", "label": "tempest-network-smoke--344329804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap932faebb-b2", "ovs_interfaceid": "932faebb-b274-4e17-94a9-9339a27c275f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:08:56 compute-1 ceph-mon[80135]: pgmap v825: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 17 KiB/s wr, 1 op/s
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.461 230187 DEBUG nova.objects.instance [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lazy-loading 'system_metadata' on Instance uuid 451aa9f7-4cd0-413e-beed-8a30a8685ff1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.501 230187 DEBUG nova.objects.instance [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lazy-loading 'flavor' on Instance uuid 451aa9f7-4cd0-413e-beed-8a30a8685ff1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:08:56 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.528 230187 DEBUG nova.compute.manager [req-594dbab1-a6cc-498b-99a4-36347f725d29 req-c2badbcc-ae46-4b50-a19c-ef164fe5d5b3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received event network-vif-plugged-c1f5466b-7cb0-4db1-aacf-c88bf808a51a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.529 230187 DEBUG oslo_concurrency.lockutils [req-594dbab1-a6cc-498b-99a4-36347f725d29 req-c2badbcc-ae46-4b50-a19c-ef164fe5d5b3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.530 230187 DEBUG oslo_concurrency.lockutils [req-594dbab1-a6cc-498b-99a4-36347f725d29 req-c2badbcc-ae46-4b50-a19c-ef164fe5d5b3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.530 230187 DEBUG oslo_concurrency.lockutils [req-594dbab1-a6cc-498b-99a4-36347f725d29 req-c2badbcc-ae46-4b50-a19c-ef164fe5d5b3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.531 230187 DEBUG nova.compute.manager [req-594dbab1-a6cc-498b-99a4-36347f725d29 req-c2badbcc-ae46-4b50-a19c-ef164fe5d5b3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] No waiting events found dispatching network-vif-plugged-c1f5466b-7cb0-4db1-aacf-c88bf808a51a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.531 230187 WARNING nova.compute.manager [req-594dbab1-a6cc-498b-99a4-36347f725d29 req-c2badbcc-ae46-4b50-a19c-ef164fe5d5b3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received unexpected event network-vif-plugged-c1f5466b-7cb0-4db1-aacf-c88bf808a51a for instance with vm_state active and task_state None.
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.532 230187 DEBUG nova.compute.manager [req-594dbab1-a6cc-498b-99a4-36347f725d29 req-c2badbcc-ae46-4b50-a19c-ef164fe5d5b3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received event network-vif-unplugged-c1f5466b-7cb0-4db1-aacf-c88bf808a51a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.533 230187 DEBUG oslo_concurrency.lockutils [req-594dbab1-a6cc-498b-99a4-36347f725d29 req-c2badbcc-ae46-4b50-a19c-ef164fe5d5b3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.533 230187 DEBUG oslo_concurrency.lockutils [req-594dbab1-a6cc-498b-99a4-36347f725d29 req-c2badbcc-ae46-4b50-a19c-ef164fe5d5b3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.534 230187 DEBUG oslo_concurrency.lockutils [req-594dbab1-a6cc-498b-99a4-36347f725d29 req-c2badbcc-ae46-4b50-a19c-ef164fe5d5b3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.534 230187 DEBUG nova.compute.manager [req-594dbab1-a6cc-498b-99a4-36347f725d29 req-c2badbcc-ae46-4b50-a19c-ef164fe5d5b3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] No waiting events found dispatching network-vif-unplugged-c1f5466b-7cb0-4db1-aacf-c88bf808a51a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.535 230187 WARNING nova.compute.manager [req-594dbab1-a6cc-498b-99a4-36347f725d29 req-c2badbcc-ae46-4b50-a19c-ef164fe5d5b3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received unexpected event network-vif-unplugged-c1f5466b-7cb0-4db1-aacf-c88bf808a51a for instance with vm_state active and task_state None.
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.535 230187 DEBUG nova.compute.manager [req-594dbab1-a6cc-498b-99a4-36347f725d29 req-c2badbcc-ae46-4b50-a19c-ef164fe5d5b3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received event network-vif-plugged-c1f5466b-7cb0-4db1-aacf-c88bf808a51a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.536 230187 DEBUG oslo_concurrency.lockutils [req-594dbab1-a6cc-498b-99a4-36347f725d29 req-c2badbcc-ae46-4b50-a19c-ef164fe5d5b3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.536 230187 DEBUG oslo_concurrency.lockutils [req-594dbab1-a6cc-498b-99a4-36347f725d29 req-c2badbcc-ae46-4b50-a19c-ef164fe5d5b3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.537 230187 DEBUG oslo_concurrency.lockutils [req-594dbab1-a6cc-498b-99a4-36347f725d29 req-c2badbcc-ae46-4b50-a19c-ef164fe5d5b3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.537 230187 DEBUG nova.compute.manager [req-594dbab1-a6cc-498b-99a4-36347f725d29 req-c2badbcc-ae46-4b50-a19c-ef164fe5d5b3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] No waiting events found dispatching network-vif-plugged-c1f5466b-7cb0-4db1-aacf-c88bf808a51a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.537 230187 WARNING nova.compute.manager [req-594dbab1-a6cc-498b-99a4-36347f725d29 req-c2badbcc-ae46-4b50-a19c-ef164fe5d5b3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received unexpected event network-vif-plugged-c1f5466b-7cb0-4db1-aacf-c88bf808a51a for instance with vm_state active and task_state None.
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.540 230187 DEBUG nova.virt.libvirt.vif [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:08:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-163368612',display_name='tempest-TestNetworkBasicOps-server-163368612',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-163368612',id=3,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO6ZIj438fQUpVfUUeh9lapkxwknyZNU4rtkhiTUYmBPGvkJZXNdDf4srslhWKNNtoBf1C2D4cd/jBUBjs52xRw75wPIQzFCZ8VrPBNO0yEc0UePTukzbeBIVnoSLQIebA==',key_name='tempest-TestNetworkBasicOps-1883192829',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:08:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-ptm322on',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:08:22Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=451aa9f7-4cd0-413e-beed-8a30a8685ff1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "address": "fa:16:3e:c6:5e:db", "network": {"id": "c71c794f-3bb9-41ea-bd53-fb4d0511d891", "bridge": "br-int", "label": "tempest-network-smoke--1634889975", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1f5466b-7c", "ovs_interfaceid": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.541 230187 DEBUG nova.network.os_vif_util [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Converting VIF {"id": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "address": "fa:16:3e:c6:5e:db", "network": {"id": "c71c794f-3bb9-41ea-bd53-fb4d0511d891", "bridge": "br-int", "label": "tempest-network-smoke--1634889975", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1f5466b-7c", "ovs_interfaceid": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.542 230187 DEBUG nova.network.os_vif_util [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:5e:db,bridge_name='br-int',has_traffic_filtering=True,id=c1f5466b-7cb0-4db1-aacf-c88bf808a51a,network=Network(c71c794f-3bb9-41ea-bd53-fb4d0511d891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1f5466b-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.545 230187 DEBUG nova.virt.libvirt.guest [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c6:5e:db"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc1f5466b-7c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.551 230187 DEBUG nova.virt.libvirt.guest [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:c6:5e:db"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc1f5466b-7c"/></interface>not found in domain: <domain type='kvm' id='2'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <name>instance-00000003</name>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <uuid>451aa9f7-4cd0-413e-beed-8a30a8685ff1</uuid>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <metadata>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <nova:name>tempest-TestNetworkBasicOps-server-163368612</nova:name>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <nova:creationTime>2025-11-23 21:08:55</nova:creationTime>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <nova:flavor name="m1.nano">
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <nova:memory>128</nova:memory>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <nova:disk>1</nova:disk>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <nova:swap>0</nova:swap>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <nova:ephemeral>0</nova:ephemeral>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <nova:vcpus>1</nova:vcpus>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   </nova:flavor>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <nova:owner>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   </nova:owner>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <nova:ports>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <nova:port uuid="932faebb-b274-4e17-94a9-9339a27c275f">
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </nova:port>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   </nova:ports>
Nov 23 21:08:56 compute-1 nova_compute[230183]: </nova:instance>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   </metadata>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <memory unit='KiB'>131072</memory>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <vcpu placement='static'>1</vcpu>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <resource>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <partition>/machine</partition>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   </resource>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <sysinfo type='smbios'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <system>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <entry name='manufacturer'>RDO</entry>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <entry name='product'>OpenStack Compute</entry>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <entry name='serial'>451aa9f7-4cd0-413e-beed-8a30a8685ff1</entry>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <entry name='uuid'>451aa9f7-4cd0-413e-beed-8a30a8685ff1</entry>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <entry name='family'>Virtual Machine</entry>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </system>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   </sysinfo>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <os>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <boot dev='hd'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <smbios mode='sysinfo'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   </os>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <features>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <acpi/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <apic/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <vmcoreinfo state='on'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   </features>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <cpu mode='custom' match='exact' check='full'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <vendor>AMD</vendor>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='require' name='x2apic'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='require' name='tsc-deadline'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='require' name='hypervisor'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='require' name='tsc_adjust'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='require' name='spec-ctrl'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='require' name='stibp'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='require' name='ssbd'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='require' name='cmp_legacy'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='require' name='overflow-recov'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='require' name='succor'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='require' name='ibrs'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='require' name='amd-ssbd'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='require' name='virt-ssbd'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='disable' name='lbrv'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='disable' name='tsc-scale'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='disable' name='vmcb-clean'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='disable' name='flushbyasid'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='disable' name='pause-filter'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='disable' name='pfthreshold'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='disable' name='xsaves'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='disable' name='svm'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='require' name='topoext'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='disable' name='npt'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='disable' name='nrip-save'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   </cpu>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <clock offset='utc'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <timer name='pit' tickpolicy='delay'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <timer name='hpet' present='no'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   </clock>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <on_poweroff>destroy</on_poweroff>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <on_reboot>restart</on_reboot>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <on_crash>destroy</on_crash>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <devices>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <disk type='network' device='disk'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <driver name='qemu' type='raw' cache='none'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <auth username='openstack'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:         <secret type='ceph' uuid='03808be8-ae4a-5548-82e6-4a294f1bc627'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       </auth>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <source protocol='rbd' name='vms/451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk' index='2'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:         <host name='192.168.122.100' port='6789'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:         <host name='192.168.122.102' port='6789'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:         <host name='192.168.122.101' port='6789'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       </source>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target dev='vda' bus='virtio'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='virtio-disk0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </disk>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <disk type='network' device='cdrom'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <driver name='qemu' type='raw' cache='none'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <auth username='openstack'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:         <secret type='ceph' uuid='03808be8-ae4a-5548-82e6-4a294f1bc627'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       </auth>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <source protocol='rbd' name='vms/451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk.config' index='1'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:         <host name='192.168.122.100' port='6789'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:         <host name='192.168.122.102' port='6789'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:         <host name='192.168.122.101' port='6789'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       </source>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target dev='sda' bus='sata'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <readonly/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='sata0-0-0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </disk>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='0' model='pcie-root'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pcie.0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='1' port='0x10'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.1'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='2' port='0x11'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.2'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='3' port='0x12'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.3'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='4' port='0x13'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.4'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='5' port='0x14'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.5'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='6' port='0x15'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.6'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='7' port='0x16'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.7'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='8' port='0x17'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.8'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='9' port='0x18'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.9'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='10' port='0x19'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.10'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='11' port='0x1a'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.11'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='12' port='0x1b'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.12'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='13' port='0x1c'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.13'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='14' port='0x1d'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.14'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='15' port='0x1e'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.15'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='16' port='0x1f'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.16'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='17' port='0x20'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.17'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='18' port='0x21'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.18'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='19' port='0x22'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.19'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='20' port='0x23'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.20'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='21' port='0x24'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.21'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='22' port='0x25'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.22'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='23' port='0x26'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.23'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='24' port='0x27'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.24'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='25' port='0x28'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.25'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-pci-bridge'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.26'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='usb'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='sata' index='0'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='ide'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <interface type='ethernet'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <mac address='fa:16:3e:22:80:b0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target dev='tap932faebb-b2'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model type='virtio'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <driver name='vhost' rx_queue_size='512'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <mtu size='1442'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='net0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </interface>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <serial type='pty'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <source path='/dev/pts/0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <log file='/var/lib/nova/instances/451aa9f7-4cd0-413e-beed-8a30a8685ff1/console.log' append='off'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target type='isa-serial' port='0'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:         <model name='isa-serial'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       </target>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='serial0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </serial>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <console type='pty' tty='/dev/pts/0'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <source path='/dev/pts/0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <log file='/var/lib/nova/instances/451aa9f7-4cd0-413e-beed-8a30a8685ff1/console.log' append='off'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target type='serial' port='0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='serial0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </console>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <input type='tablet' bus='usb'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='input0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='usb' bus='0' port='1'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </input>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <input type='mouse' bus='ps2'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='input1'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </input>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <input type='keyboard' bus='ps2'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='input2'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </input>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <listen type='address' address='::0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </graphics>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <audio id='1' type='none'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <video>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model type='virtio' heads='1' primary='yes'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='video0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </video>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <watchdog model='itco' action='reset'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='watchdog0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </watchdog>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <memballoon model='virtio'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <stats period='10'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='balloon0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </memballoon>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <rng model='virtio'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <backend model='random'>/dev/urandom</backend>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='rng0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </rng>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   </devices>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <label>system_u:system_r:svirt_t:s0:c591,c609</label>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c591,c609</imagelabel>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   </seclabel>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <label>+107:+107</label>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <imagelabel>+107:+107</imagelabel>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   </seclabel>
Nov 23 21:08:56 compute-1 nova_compute[230183]: </domain>
Nov 23 21:08:56 compute-1 nova_compute[230183]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.553 230187 DEBUG nova.virt.libvirt.guest [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c6:5e:db"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc1f5466b-7c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.556 230187 DEBUG nova.virt.libvirt.guest [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:c6:5e:db"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc1f5466b-7c"/></interface>not found in domain: <domain type='kvm' id='2'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <name>instance-00000003</name>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <uuid>451aa9f7-4cd0-413e-beed-8a30a8685ff1</uuid>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <metadata>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <nova:name>tempest-TestNetworkBasicOps-server-163368612</nova:name>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <nova:creationTime>2025-11-23 21:08:55</nova:creationTime>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <nova:flavor name="m1.nano">
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <nova:memory>128</nova:memory>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <nova:disk>1</nova:disk>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <nova:swap>0</nova:swap>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <nova:ephemeral>0</nova:ephemeral>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <nova:vcpus>1</nova:vcpus>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   </nova:flavor>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <nova:owner>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   </nova:owner>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <nova:ports>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <nova:port uuid="932faebb-b274-4e17-94a9-9339a27c275f">
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </nova:port>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   </nova:ports>
Nov 23 21:08:56 compute-1 nova_compute[230183]: </nova:instance>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   </metadata>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <memory unit='KiB'>131072</memory>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <vcpu placement='static'>1</vcpu>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <resource>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <partition>/machine</partition>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   </resource>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <sysinfo type='smbios'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <system>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <entry name='manufacturer'>RDO</entry>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <entry name='product'>OpenStack Compute</entry>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <entry name='serial'>451aa9f7-4cd0-413e-beed-8a30a8685ff1</entry>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <entry name='uuid'>451aa9f7-4cd0-413e-beed-8a30a8685ff1</entry>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <entry name='family'>Virtual Machine</entry>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </system>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   </sysinfo>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <os>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <boot dev='hd'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <smbios mode='sysinfo'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   </os>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <features>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <acpi/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <apic/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <vmcoreinfo state='on'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   </features>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <cpu mode='custom' match='exact' check='full'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <vendor>AMD</vendor>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='require' name='x2apic'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='require' name='tsc-deadline'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='require' name='hypervisor'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='require' name='tsc_adjust'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='require' name='spec-ctrl'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='require' name='stibp'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='require' name='ssbd'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='require' name='cmp_legacy'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='require' name='overflow-recov'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='require' name='succor'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='require' name='ibrs'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='require' name='amd-ssbd'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='require' name='virt-ssbd'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='disable' name='lbrv'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='disable' name='tsc-scale'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='disable' name='vmcb-clean'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='disable' name='flushbyasid'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='disable' name='pause-filter'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='disable' name='pfthreshold'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='disable' name='xsaves'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='disable' name='svm'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='require' name='topoext'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='disable' name='npt'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <feature policy='disable' name='nrip-save'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   </cpu>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <clock offset='utc'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <timer name='pit' tickpolicy='delay'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <timer name='hpet' present='no'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   </clock>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <on_poweroff>destroy</on_poweroff>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <on_reboot>restart</on_reboot>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <on_crash>destroy</on_crash>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <devices>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <disk type='network' device='disk'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <driver name='qemu' type='raw' cache='none'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <auth username='openstack'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:         <secret type='ceph' uuid='03808be8-ae4a-5548-82e6-4a294f1bc627'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       </auth>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <source protocol='rbd' name='vms/451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk' index='2'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:         <host name='192.168.122.100' port='6789'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:         <host name='192.168.122.102' port='6789'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:         <host name='192.168.122.101' port='6789'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       </source>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target dev='vda' bus='virtio'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='virtio-disk0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </disk>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <disk type='network' device='cdrom'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <driver name='qemu' type='raw' cache='none'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <auth username='openstack'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:         <secret type='ceph' uuid='03808be8-ae4a-5548-82e6-4a294f1bc627'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       </auth>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <source protocol='rbd' name='vms/451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk.config' index='1'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:         <host name='192.168.122.100' port='6789'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:         <host name='192.168.122.102' port='6789'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:         <host name='192.168.122.101' port='6789'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       </source>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target dev='sda' bus='sata'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <readonly/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='sata0-0-0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </disk>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='0' model='pcie-root'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pcie.0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='1' port='0x10'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.1'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='2' port='0x11'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.2'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='3' port='0x12'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.3'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='4' port='0x13'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.4'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='5' port='0x14'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.5'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='6' port='0x15'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.6'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='7' port='0x16'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.7'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='8' port='0x17'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.8'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='9' port='0x18'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.9'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='10' port='0x19'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.10'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='11' port='0x1a'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.11'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='12' port='0x1b'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.12'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='13' port='0x1c'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.13'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='14' port='0x1d'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.14'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='15' port='0x1e'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.15'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='16' port='0x1f'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.16'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='17' port='0x20'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.17'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='18' port='0x21'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.18'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='19' port='0x22'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.19'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='20' port='0x23'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.20'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='21' port='0x24'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.21'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='22' port='0x25'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.22'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='23' port='0x26'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.23'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='24' port='0x27'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.24'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target chassis='25' port='0x28'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.25'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model name='pcie-pci-bridge'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='pci.26'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='usb'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <controller type='sata' index='0'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='ide'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <interface type='ethernet'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <mac address='fa:16:3e:22:80:b0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target dev='tap932faebb-b2'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model type='virtio'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <driver name='vhost' rx_queue_size='512'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <mtu size='1442'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='net0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </interface>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <serial type='pty'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <source path='/dev/pts/0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <log file='/var/lib/nova/instances/451aa9f7-4cd0-413e-beed-8a30a8685ff1/console.log' append='off'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target type='isa-serial' port='0'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:         <model name='isa-serial'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       </target>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='serial0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </serial>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <console type='pty' tty='/dev/pts/0'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <source path='/dev/pts/0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <log file='/var/lib/nova/instances/451aa9f7-4cd0-413e-beed-8a30a8685ff1/console.log' append='off'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <target type='serial' port='0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='serial0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </console>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <input type='tablet' bus='usb'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='input0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='usb' bus='0' port='1'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </input>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <input type='mouse' bus='ps2'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='input1'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </input>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <input type='keyboard' bus='ps2'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='input2'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </input>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <listen type='address' address='::0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </graphics>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <audio id='1' type='none'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <video>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <model type='virtio' heads='1' primary='yes'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='video0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </video>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <watchdog model='itco' action='reset'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='watchdog0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </watchdog>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <memballoon model='virtio'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <stats period='10'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='balloon0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </memballoon>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <rng model='virtio'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <backend model='random'>/dev/urandom</backend>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <alias name='rng0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </rng>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   </devices>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <label>system_u:system_r:svirt_t:s0:c591,c609</label>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c591,c609</imagelabel>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   </seclabel>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <label>+107:+107</label>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <imagelabel>+107:+107</imagelabel>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   </seclabel>
Nov 23 21:08:56 compute-1 nova_compute[230183]: </domain>
Nov 23 21:08:56 compute-1 nova_compute[230183]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.556 230187 WARNING nova.virt.libvirt.driver [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Detaching interface fa:16:3e:c6:5e:db failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapc1f5466b-7c' not found.
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.557 230187 DEBUG nova.virt.libvirt.vif [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:08:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-163368612',display_name='tempest-TestNetworkBasicOps-server-163368612',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-163368612',id=3,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO6ZIj438fQUpVfUUeh9lapkxwknyZNU4rtkhiTUYmBPGvkJZXNdDf4srslhWKNNtoBf1C2D4cd/jBUBjs52xRw75wPIQzFCZ8VrPBNO0yEc0UePTukzbeBIVnoSLQIebA==',key_name='tempest-TestNetworkBasicOps-1883192829',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:08:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-ptm322on',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:08:22Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=451aa9f7-4cd0-413e-beed-8a30a8685ff1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "address": "fa:16:3e:c6:5e:db", "network": {"id": "c71c794f-3bb9-41ea-bd53-fb4d0511d891", "bridge": "br-int", "label": "tempest-network-smoke--1634889975", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1f5466b-7c", "ovs_interfaceid": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.558 230187 DEBUG nova.network.os_vif_util [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Converting VIF {"id": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "address": "fa:16:3e:c6:5e:db", "network": {"id": "c71c794f-3bb9-41ea-bd53-fb4d0511d891", "bridge": "br-int", "label": "tempest-network-smoke--1634889975", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1f5466b-7c", "ovs_interfaceid": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.558 230187 DEBUG nova.network.os_vif_util [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:5e:db,bridge_name='br-int',has_traffic_filtering=True,id=c1f5466b-7cb0-4db1-aacf-c88bf808a51a,network=Network(c71c794f-3bb9-41ea-bd53-fb4d0511d891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1f5466b-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.559 230187 DEBUG os_vif [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:5e:db,bridge_name='br-int',has_traffic_filtering=True,id=c1f5466b-7cb0-4db1-aacf-c88bf808a51a,network=Network(c71c794f-3bb9-41ea-bd53-fb4d0511d891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1f5466b-7c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.563 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.563 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1f5466b-7c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.564 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.566 230187 INFO os_vif [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:5e:db,bridge_name='br-int',has_traffic_filtering=True,id=c1f5466b-7cb0-4db1-aacf-c88bf808a51a,network=Network(c71c794f-3bb9-41ea-bd53-fb4d0511d891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1f5466b-7c')
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.567 230187 DEBUG nova.virt.libvirt.guest [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <nova:name>tempest-TestNetworkBasicOps-server-163368612</nova:name>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <nova:creationTime>2025-11-23 21:08:56</nova:creationTime>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <nova:flavor name="m1.nano">
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <nova:memory>128</nova:memory>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <nova:disk>1</nova:disk>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <nova:swap>0</nova:swap>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <nova:ephemeral>0</nova:ephemeral>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <nova:vcpus>1</nova:vcpus>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   </nova:flavor>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <nova:owner>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   </nova:owner>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   <nova:ports>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     <nova:port uuid="932faebb-b274-4e17-94a9-9339a27c275f">
Nov 23 21:08:56 compute-1 nova_compute[230183]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 23 21:08:56 compute-1 nova_compute[230183]:     </nova:port>
Nov 23 21:08:56 compute-1 nova_compute[230183]:   </nova:ports>
Nov 23 21:08:56 compute-1 nova_compute[230183]: </nova:instance>
Nov 23 21:08:56 compute-1 nova_compute[230183]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 23 21:08:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:56.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:56 compute-1 nova_compute[230183]: 2025-11-23 21:08:56.973 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:57 compute-1 nova_compute[230183]: 2025-11-23 21:08:57.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:08:57 compute-1 nova_compute[230183]: 2025-11-23 21:08:57.428 230187 INFO nova.network.neutron [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Port c1f5466b-7cb0-4db1-aacf-c88bf808a51a from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 23 21:08:57 compute-1 nova_compute[230183]: 2025-11-23 21:08:57.429 230187 DEBUG nova.network.neutron [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Updating instance_info_cache with network_info: [{"id": "932faebb-b274-4e17-94a9-9339a27c275f", "address": "fa:16:3e:22:80:b0", "network": {"id": "0cfca448-ff51-45d5-9a96-e7d306414608", "bridge": "br-int", "label": "tempest-network-smoke--344329804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap932faebb-b2", "ovs_interfaceid": "932faebb-b274-4e17-94a9-9339a27c275f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:08:57 compute-1 nova_compute[230183]: 2025-11-23 21:08:57.446 230187 DEBUG oslo_concurrency.lockutils [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Releasing lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:08:57 compute-1 nova_compute[230183]: 2025-11-23 21:08:57.461 230187 DEBUG oslo_concurrency.lockutils [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "interface-451aa9f7-4cd0-413e-beed-8a30a8685ff1-c1f5466b-7cb0-4db1-aacf-c88bf808a51a" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 1.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:08:57 compute-1 ovn_controller[132845]: 2025-11-23T21:08:57Z|00060|binding|INFO|Releasing lport 54600d4f-e167-4eaf-830f-ddc1c402909e from this chassis (sb_readonly=0)
Nov 23 21:08:57 compute-1 nova_compute[230183]: 2025-11-23 21:08:57.592 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:58.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:58 compute-1 ceph-mon[80135]: pgmap v826: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 5.7 KiB/s wr, 1 op/s
Nov 23 21:08:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:08:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:08:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:58.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:08:59 compute-1 nova_compute[230183]: 2025-11-23 21:08:59.289 230187 DEBUG nova.compute.manager [req-d53871ab-0dae-4a98-9f78-10bd66941538 req-672799ac-b621-4040-a298-a45dec597189 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received event network-changed-932faebb-b274-4e17-94a9-9339a27c275f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:08:59 compute-1 nova_compute[230183]: 2025-11-23 21:08:59.289 230187 DEBUG nova.compute.manager [req-d53871ab-0dae-4a98-9f78-10bd66941538 req-672799ac-b621-4040-a298-a45dec597189 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Refreshing instance network info cache due to event network-changed-932faebb-b274-4e17-94a9-9339a27c275f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 23 21:08:59 compute-1 nova_compute[230183]: 2025-11-23 21:08:59.290 230187 DEBUG oslo_concurrency.lockutils [req-d53871ab-0dae-4a98-9f78-10bd66941538 req-672799ac-b621-4040-a298-a45dec597189 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:08:59 compute-1 nova_compute[230183]: 2025-11-23 21:08:59.290 230187 DEBUG oslo_concurrency.lockutils [req-d53871ab-0dae-4a98-9f78-10bd66941538 req-672799ac-b621-4040-a298-a45dec597189 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:08:59 compute-1 nova_compute[230183]: 2025-11-23 21:08:59.290 230187 DEBUG nova.network.neutron [req-d53871ab-0dae-4a98-9f78-10bd66941538 req-672799ac-b621-4040-a298-a45dec597189 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Refreshing network info cache for port 932faebb-b274-4e17-94a9-9339a27c275f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 23 21:08:59 compute-1 nova_compute[230183]: 2025-11-23 21:08:59.365 230187 DEBUG oslo_concurrency.lockutils [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:08:59 compute-1 nova_compute[230183]: 2025-11-23 21:08:59.365 230187 DEBUG oslo_concurrency.lockutils [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:08:59 compute-1 nova_compute[230183]: 2025-11-23 21:08:59.365 230187 DEBUG oslo_concurrency.lockutils [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:08:59 compute-1 nova_compute[230183]: 2025-11-23 21:08:59.365 230187 DEBUG oslo_concurrency.lockutils [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:08:59 compute-1 nova_compute[230183]: 2025-11-23 21:08:59.365 230187 DEBUG oslo_concurrency.lockutils [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:08:59 compute-1 nova_compute[230183]: 2025-11-23 21:08:59.366 230187 INFO nova.compute.manager [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Terminating instance
Nov 23 21:08:59 compute-1 nova_compute[230183]: 2025-11-23 21:08:59.367 230187 DEBUG nova.compute.manager [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 23 21:08:59 compute-1 kernel: tap932faebb-b2 (unregistering): left promiscuous mode
Nov 23 21:08:59 compute-1 NetworkManager[49021]: <info>  [1763932139.4196] device (tap932faebb-b2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 23 21:08:59 compute-1 nova_compute[230183]: 2025-11-23 21:08:59.423 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:08:59 compute-1 nova_compute[230183]: 2025-11-23 21:08:59.427 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:59 compute-1 ovn_controller[132845]: 2025-11-23T21:08:59Z|00061|binding|INFO|Releasing lport 932faebb-b274-4e17-94a9-9339a27c275f from this chassis (sb_readonly=0)
Nov 23 21:08:59 compute-1 ovn_controller[132845]: 2025-11-23T21:08:59Z|00062|binding|INFO|Setting lport 932faebb-b274-4e17-94a9-9339a27c275f down in Southbound
Nov 23 21:08:59 compute-1 ovn_controller[132845]: 2025-11-23T21:08:59Z|00063|binding|INFO|Removing iface tap932faebb-b2 ovn-installed in OVS
Nov 23 21:08:59 compute-1 nova_compute[230183]: 2025-11-23 21:08:59.429 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:59 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:59.436 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:80:b0 10.100.0.5'], port_security=['fa:16:3e:22:80:b0 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '451aa9f7-4cd0-413e-beed-8a30a8685ff1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0cfca448-ff51-45d5-9a96-e7d306414608', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b3669a8c-2edc-4975-aec5-618de39b846f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab9ca556-3834-43fe-9280-f86716cb1ac8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=932faebb-b274-4e17-94a9-9339a27c275f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 21:08:59 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:59.437 142158 INFO neutron.agent.ovn.metadata.agent [-] Port 932faebb-b274-4e17-94a9-9339a27c275f in datapath 0cfca448-ff51-45d5-9a96-e7d306414608 unbound from our chassis
Nov 23 21:08:59 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:59.438 142158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0cfca448-ff51-45d5-9a96-e7d306414608, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 23 21:08:59 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:59.438 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[b47a4487-d14b-4baf-ae7d-6cd32624508e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:59 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:59.439 142158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608 namespace which is not needed anymore
Nov 23 21:08:59 compute-1 nova_compute[230183]: 2025-11-23 21:08:59.447 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:59 compute-1 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Deactivated successfully.
Nov 23 21:08:59 compute-1 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Consumed 14.764s CPU time.
Nov 23 21:08:59 compute-1 systemd-machined[193469]: Machine qemu-2-instance-00000003 terminated.
Nov 23 21:08:59 compute-1 neutron-haproxy-ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608[235192]: [NOTICE]   (235200) : haproxy version is 2.8.14-c23fe91
Nov 23 21:08:59 compute-1 neutron-haproxy-ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608[235192]: [NOTICE]   (235200) : path to executable is /usr/sbin/haproxy
Nov 23 21:08:59 compute-1 neutron-haproxy-ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608[235192]: [WARNING]  (235200) : Exiting Master process...
Nov 23 21:08:59 compute-1 neutron-haproxy-ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608[235192]: [ALERT]    (235200) : Current worker (235202) exited with code 143 (Terminated)
Nov 23 21:08:59 compute-1 neutron-haproxy-ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608[235192]: [WARNING]  (235200) : All workers exited. Exiting... (0)
Nov 23 21:08:59 compute-1 systemd[1]: libpod-4d6790caaaf3d0762e0973c0e27b136fb698887c845f3709538675eb279e1ffa.scope: Deactivated successfully.
Nov 23 21:08:59 compute-1 podman[235643]: 2025-11-23 21:08:59.560847145 +0000 UTC m=+0.042098148 container died 4d6790caaaf3d0762e0973c0e27b136fb698887c845f3709538675eb279e1ffa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 21:08:59 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4d6790caaaf3d0762e0973c0e27b136fb698887c845f3709538675eb279e1ffa-userdata-shm.mount: Deactivated successfully.
Nov 23 21:08:59 compute-1 nova_compute[230183]: 2025-11-23 21:08:59.594 230187 INFO nova.virt.libvirt.driver [-] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Instance destroyed successfully.
Nov 23 21:08:59 compute-1 nova_compute[230183]: 2025-11-23 21:08:59.594 230187 DEBUG nova.objects.instance [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'resources' on Instance uuid 451aa9f7-4cd0-413e-beed-8a30a8685ff1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:08:59 compute-1 systemd[1]: var-lib-containers-storage-overlay-8e59bb26b82ad07b4bc95bd3eabbfae128162a27036a9012db8ac3aeadc048e2-merged.mount: Deactivated successfully.
Nov 23 21:08:59 compute-1 nova_compute[230183]: 2025-11-23 21:08:59.605 230187 DEBUG nova.virt.libvirt.vif [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:08:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-163368612',display_name='tempest-TestNetworkBasicOps-server-163368612',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-163368612',id=3,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO6ZIj438fQUpVfUUeh9lapkxwknyZNU4rtkhiTUYmBPGvkJZXNdDf4srslhWKNNtoBf1C2D4cd/jBUBjs52xRw75wPIQzFCZ8VrPBNO0yEc0UePTukzbeBIVnoSLQIebA==',key_name='tempest-TestNetworkBasicOps-1883192829',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:08:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-ptm322on',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:08:22Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=451aa9f7-4cd0-413e-beed-8a30a8685ff1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "932faebb-b274-4e17-94a9-9339a27c275f", "address": "fa:16:3e:22:80:b0", "network": {"id": "0cfca448-ff51-45d5-9a96-e7d306414608", "bridge": "br-int", "label": "tempest-network-smoke--344329804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap932faebb-b2", "ovs_interfaceid": "932faebb-b274-4e17-94a9-9339a27c275f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 23 21:08:59 compute-1 nova_compute[230183]: 2025-11-23 21:08:59.606 230187 DEBUG nova.network.os_vif_util [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "932faebb-b274-4e17-94a9-9339a27c275f", "address": "fa:16:3e:22:80:b0", "network": {"id": "0cfca448-ff51-45d5-9a96-e7d306414608", "bridge": "br-int", "label": "tempest-network-smoke--344329804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap932faebb-b2", "ovs_interfaceid": "932faebb-b274-4e17-94a9-9339a27c275f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 23 21:08:59 compute-1 nova_compute[230183]: 2025-11-23 21:08:59.606 230187 DEBUG nova.network.os_vif_util [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:22:80:b0,bridge_name='br-int',has_traffic_filtering=True,id=932faebb-b274-4e17-94a9-9339a27c275f,network=Network(0cfca448-ff51-45d5-9a96-e7d306414608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap932faebb-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 23 21:08:59 compute-1 nova_compute[230183]: 2025-11-23 21:08:59.607 230187 DEBUG os_vif [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:22:80:b0,bridge_name='br-int',has_traffic_filtering=True,id=932faebb-b274-4e17-94a9-9339a27c275f,network=Network(0cfca448-ff51-45d5-9a96-e7d306414608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap932faebb-b2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 23 21:08:59 compute-1 podman[235643]: 2025-11-23 21:08:59.609750211 +0000 UTC m=+0.091001214 container cleanup 4d6790caaaf3d0762e0973c0e27b136fb698887c845f3709538675eb279e1ffa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 23 21:08:59 compute-1 nova_compute[230183]: 2025-11-23 21:08:59.610 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:59 compute-1 nova_compute[230183]: 2025-11-23 21:08:59.610 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap932faebb-b2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:08:59 compute-1 systemd[1]: libpod-conmon-4d6790caaaf3d0762e0973c0e27b136fb698887c845f3709538675eb279e1ffa.scope: Deactivated successfully.
Nov 23 21:08:59 compute-1 nova_compute[230183]: 2025-11-23 21:08:59.664 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:59 compute-1 nova_compute[230183]: 2025-11-23 21:08:59.668 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:08:59 compute-1 nova_compute[230183]: 2025-11-23 21:08:59.669 230187 INFO os_vif [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:22:80:b0,bridge_name='br-int',has_traffic_filtering=True,id=932faebb-b274-4e17-94a9-9339a27c275f,network=Network(0cfca448-ff51-45d5-9a96-e7d306414608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap932faebb-b2')
Nov 23 21:08:59 compute-1 podman[235683]: 2025-11-23 21:08:59.671163044 +0000 UTC m=+0.041697468 container remove 4d6790caaaf3d0762e0973c0e27b136fb698887c845f3709538675eb279e1ffa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 23 21:08:59 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:59.676 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[247eca96-e605-4bea-a3b6-4eb268dba296]: (4, ('Sun Nov 23 09:08:59 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608 (4d6790caaaf3d0762e0973c0e27b136fb698887c845f3709538675eb279e1ffa)\n4d6790caaaf3d0762e0973c0e27b136fb698887c845f3709538675eb279e1ffa\nSun Nov 23 09:08:59 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608 (4d6790caaaf3d0762e0973c0e27b136fb698887c845f3709538675eb279e1ffa)\n4d6790caaaf3d0762e0973c0e27b136fb698887c845f3709538675eb279e1ffa\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:59 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:59.677 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[2429a2bf-d1f1-4fe9-bd2d-b3252115b5bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:59 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:59.678 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0cfca448-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:08:59 compute-1 kernel: tap0cfca448-f0: left promiscuous mode
Nov 23 21:08:59 compute-1 nova_compute[230183]: 2025-11-23 21:08:59.690 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:59 compute-1 nova_compute[230183]: 2025-11-23 21:08:59.694 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:08:59 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:59.697 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[a77340d2-1f9a-4867-9195-f292dd777371]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:59 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:59.714 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[446b92a5-1371-4b58-8799-8f31f011b8ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:59 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:59.715 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[9d57ae47-2b5c-4b94-8a9d-7a9e0cd07a34]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:59 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:59.737 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[1eadfa44-e541-462e-bcb8-3198810983f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 405016, 'reachable_time': 33983, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 235714, 'error': None, 'target': 'ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:59 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:59.739 142272 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 23 21:08:59 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:08:59.739 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[0fdd37ed-a185-43cd-902d-9302709e483c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:08:59 compute-1 systemd[1]: run-netns-ovnmeta\x2d0cfca448\x2dff51\x2d45d5\x2d9a96\x2de7d306414608.mount: Deactivated successfully.
Nov 23 21:09:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:00.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:00 compute-1 nova_compute[230183]: 2025-11-23 21:09:00.296 230187 INFO nova.virt.libvirt.driver [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Deleting instance files /var/lib/nova/instances/451aa9f7-4cd0-413e-beed-8a30a8685ff1_del
Nov 23 21:09:00 compute-1 nova_compute[230183]: 2025-11-23 21:09:00.297 230187 INFO nova.virt.libvirt.driver [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Deletion of /var/lib/nova/instances/451aa9f7-4cd0-413e-beed-8a30a8685ff1_del complete
Nov 23 21:09:00 compute-1 nova_compute[230183]: 2025-11-23 21:09:00.344 230187 INFO nova.compute.manager [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Took 0.98 seconds to destroy the instance on the hypervisor.
Nov 23 21:09:00 compute-1 nova_compute[230183]: 2025-11-23 21:09:00.345 230187 DEBUG oslo.service.loopingcall [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 23 21:09:00 compute-1 nova_compute[230183]: 2025-11-23 21:09:00.346 230187 DEBUG nova.compute.manager [-] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 23 21:09:00 compute-1 nova_compute[230183]: 2025-11-23 21:09:00.346 230187 DEBUG nova.network.neutron [-] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 23 21:09:00 compute-1 nova_compute[230183]: 2025-11-23 21:09:00.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:09:00 compute-1 nova_compute[230183]: 2025-11-23 21:09:00.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:09:00 compute-1 nova_compute[230183]: 2025-11-23 21:09:00.428 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:09:00 compute-1 nova_compute[230183]: 2025-11-23 21:09:00.428 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 21:09:00 compute-1 ceph-mon[80135]: pgmap v827: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 6.1 KiB/s rd, 5.7 KiB/s wr, 1 op/s
Nov 23 21:09:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:00.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:01 compute-1 nova_compute[230183]: 2025-11-23 21:09:01.371 230187 DEBUG nova.compute.manager [req-f5e5a756-9330-46b4-bf23-d6e4772e9394 req-00d8f286-fcfd-4713-98c9-2fa24d696cd3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received event network-vif-unplugged-932faebb-b274-4e17-94a9-9339a27c275f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:09:01 compute-1 nova_compute[230183]: 2025-11-23 21:09:01.371 230187 DEBUG oslo_concurrency.lockutils [req-f5e5a756-9330-46b4-bf23-d6e4772e9394 req-00d8f286-fcfd-4713-98c9-2fa24d696cd3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:09:01 compute-1 nova_compute[230183]: 2025-11-23 21:09:01.371 230187 DEBUG oslo_concurrency.lockutils [req-f5e5a756-9330-46b4-bf23-d6e4772e9394 req-00d8f286-fcfd-4713-98c9-2fa24d696cd3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:09:01 compute-1 nova_compute[230183]: 2025-11-23 21:09:01.372 230187 DEBUG oslo_concurrency.lockutils [req-f5e5a756-9330-46b4-bf23-d6e4772e9394 req-00d8f286-fcfd-4713-98c9-2fa24d696cd3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:09:01 compute-1 nova_compute[230183]: 2025-11-23 21:09:01.372 230187 DEBUG nova.compute.manager [req-f5e5a756-9330-46b4-bf23-d6e4772e9394 req-00d8f286-fcfd-4713-98c9-2fa24d696cd3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] No waiting events found dispatching network-vif-unplugged-932faebb-b274-4e17-94a9-9339a27c275f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 21:09:01 compute-1 nova_compute[230183]: 2025-11-23 21:09:01.372 230187 DEBUG nova.compute.manager [req-f5e5a756-9330-46b4-bf23-d6e4772e9394 req-00d8f286-fcfd-4713-98c9-2fa24d696cd3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received event network-vif-unplugged-932faebb-b274-4e17-94a9-9339a27c275f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 23 21:09:01 compute-1 nova_compute[230183]: 2025-11-23 21:09:01.372 230187 DEBUG nova.compute.manager [req-f5e5a756-9330-46b4-bf23-d6e4772e9394 req-00d8f286-fcfd-4713-98c9-2fa24d696cd3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received event network-vif-plugged-932faebb-b274-4e17-94a9-9339a27c275f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:09:01 compute-1 nova_compute[230183]: 2025-11-23 21:09:01.372 230187 DEBUG oslo_concurrency.lockutils [req-f5e5a756-9330-46b4-bf23-d6e4772e9394 req-00d8f286-fcfd-4713-98c9-2fa24d696cd3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:09:01 compute-1 nova_compute[230183]: 2025-11-23 21:09:01.373 230187 DEBUG oslo_concurrency.lockutils [req-f5e5a756-9330-46b4-bf23-d6e4772e9394 req-00d8f286-fcfd-4713-98c9-2fa24d696cd3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:09:01 compute-1 nova_compute[230183]: 2025-11-23 21:09:01.373 230187 DEBUG oslo_concurrency.lockutils [req-f5e5a756-9330-46b4-bf23-d6e4772e9394 req-00d8f286-fcfd-4713-98c9-2fa24d696cd3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:09:01 compute-1 nova_compute[230183]: 2025-11-23 21:09:01.373 230187 DEBUG nova.compute.manager [req-f5e5a756-9330-46b4-bf23-d6e4772e9394 req-00d8f286-fcfd-4713-98c9-2fa24d696cd3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] No waiting events found dispatching network-vif-plugged-932faebb-b274-4e17-94a9-9339a27c275f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 21:09:01 compute-1 nova_compute[230183]: 2025-11-23 21:09:01.373 230187 WARNING nova.compute.manager [req-f5e5a756-9330-46b4-bf23-d6e4772e9394 req-00d8f286-fcfd-4713-98c9-2fa24d696cd3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received unexpected event network-vif-plugged-932faebb-b274-4e17-94a9-9339a27c275f for instance with vm_state active and task_state deleting.
Nov 23 21:09:01 compute-1 nova_compute[230183]: 2025-11-23 21:09:01.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:09:01 compute-1 nova_compute[230183]: 2025-11-23 21:09:01.427 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 21:09:01 compute-1 nova_compute[230183]: 2025-11-23 21:09:01.427 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 21:09:01 compute-1 nova_compute[230183]: 2025-11-23 21:09:01.440 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 23 21:09:01 compute-1 nova_compute[230183]: 2025-11-23 21:09:01.440 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 21:09:01 compute-1 nova_compute[230183]: 2025-11-23 21:09:01.440 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:09:01 compute-1 nova_compute[230183]: 2025-11-23 21:09:01.441 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:09:01 compute-1 nova_compute[230183]: 2025-11-23 21:09:01.463 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:09:01 compute-1 nova_compute[230183]: 2025-11-23 21:09:01.463 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:09:01 compute-1 nova_compute[230183]: 2025-11-23 21:09:01.464 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:09:01 compute-1 nova_compute[230183]: 2025-11-23 21:09:01.464 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 21:09:01 compute-1 nova_compute[230183]: 2025-11-23 21:09:01.464 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:09:01 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:09:01 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:09:01 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1716591767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:09:01 compute-1 nova_compute[230183]: 2025-11-23 21:09:01.941 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:09:01 compute-1 nova_compute[230183]: 2025-11-23 21:09:01.975 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:02 compute-1 nova_compute[230183]: 2025-11-23 21:09:02.040 230187 DEBUG nova.network.neutron [req-d53871ab-0dae-4a98-9f78-10bd66941538 req-672799ac-b621-4040-a298-a45dec597189 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Updated VIF entry in instance network info cache for port 932faebb-b274-4e17-94a9-9339a27c275f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 23 21:09:02 compute-1 nova_compute[230183]: 2025-11-23 21:09:02.040 230187 DEBUG nova.network.neutron [req-d53871ab-0dae-4a98-9f78-10bd66941538 req-672799ac-b621-4040-a298-a45dec597189 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Updating instance_info_cache with network_info: [{"id": "932faebb-b274-4e17-94a9-9339a27c275f", "address": "fa:16:3e:22:80:b0", "network": {"id": "0cfca448-ff51-45d5-9a96-e7d306414608", "bridge": "br-int", "label": "tempest-network-smoke--344329804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap932faebb-b2", "ovs_interfaceid": "932faebb-b274-4e17-94a9-9339a27c275f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:09:02 compute-1 nova_compute[230183]: 2025-11-23 21:09:02.058 230187 DEBUG oslo_concurrency.lockutils [req-d53871ab-0dae-4a98-9f78-10bd66941538 req-672799ac-b621-4040-a298-a45dec597189 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:09:02 compute-1 nova_compute[230183]: 2025-11-23 21:09:02.105 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 21:09:02 compute-1 nova_compute[230183]: 2025-11-23 21:09:02.106 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4925MB free_disk=59.94853591918945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 21:09:02 compute-1 nova_compute[230183]: 2025-11-23 21:09:02.106 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:09:02 compute-1 nova_compute[230183]: 2025-11-23 21:09:02.106 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:09:02 compute-1 nova_compute[230183]: 2025-11-23 21:09:02.166 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Instance 451aa9f7-4cd0-413e-beed-8a30a8685ff1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 23 21:09:02 compute-1 nova_compute[230183]: 2025-11-23 21:09:02.167 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 21:09:02 compute-1 nova_compute[230183]: 2025-11-23 21:09:02.167 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 21:09:02 compute-1 nova_compute[230183]: 2025-11-23 21:09:02.196 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:09:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:09:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:02.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:09:02 compute-1 ceph-mon[80135]: pgmap v828: 337 pgs: 337 active+clean; 111 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 7.8 KiB/s rd, 5.9 KiB/s wr, 3 op/s
Nov 23 21:09:02 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1716591767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:09:02 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:09:02 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2405935886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:09:02 compute-1 nova_compute[230183]: 2025-11-23 21:09:02.653 230187 DEBUG nova.network.neutron [-] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:09:02 compute-1 nova_compute[230183]: 2025-11-23 21:09:02.657 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:09:02 compute-1 nova_compute[230183]: 2025-11-23 21:09:02.664 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:09:02 compute-1 nova_compute[230183]: 2025-11-23 21:09:02.666 230187 INFO nova.compute.manager [-] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Took 2.32 seconds to deallocate network for instance.
Nov 23 21:09:02 compute-1 nova_compute[230183]: 2025-11-23 21:09:02.683 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:09:02 compute-1 nova_compute[230183]: 2025-11-23 21:09:02.708 230187 DEBUG oslo_concurrency.lockutils [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:09:02 compute-1 nova_compute[230183]: 2025-11-23 21:09:02.709 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 21:09:02 compute-1 nova_compute[230183]: 2025-11-23 21:09:02.710 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:09:02 compute-1 nova_compute[230183]: 2025-11-23 21:09:02.710 230187 DEBUG oslo_concurrency.lockutils [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:09:02 compute-1 nova_compute[230183]: 2025-11-23 21:09:02.749 230187 DEBUG oslo_concurrency.processutils [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:09:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:02.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:03 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:09:03 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4237980380' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:09:03 compute-1 nova_compute[230183]: 2025-11-23 21:09:03.246 230187 DEBUG oslo_concurrency.processutils [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:09:03 compute-1 nova_compute[230183]: 2025-11-23 21:09:03.251 230187 DEBUG nova.compute.provider_tree [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:09:03 compute-1 nova_compute[230183]: 2025-11-23 21:09:03.269 230187 DEBUG nova.scheduler.client.report [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:09:03 compute-1 nova_compute[230183]: 2025-11-23 21:09:03.289 230187 DEBUG oslo_concurrency.lockutils [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:09:03 compute-1 nova_compute[230183]: 2025-11-23 21:09:03.310 230187 INFO nova.scheduler.client.report [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Deleted allocations for instance 451aa9f7-4cd0-413e-beed-8a30a8685ff1
Nov 23 21:09:03 compute-1 nova_compute[230183]: 2025-11-23 21:09:03.373 230187 DEBUG oslo_concurrency.lockutils [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:09:03 compute-1 nova_compute[230183]: 2025-11-23 21:09:03.448 230187 DEBUG nova.compute.manager [req-6f4138a2-c3c7-4d93-8684-4aed0ead9400 req-f250c723-8653-4c18-bd03-8d73e4b5e62d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received event network-vif-deleted-932faebb-b274-4e17-94a9-9339a27c275f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:09:03 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2405935886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:09:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:09:03 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/4237980380' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:09:03 compute-1 nova_compute[230183]: 2025-11-23 21:09:03.697 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:09:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:04.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:04 compute-1 sudo[235787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:09:04 compute-1 sudo[235787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:09:04 compute-1 sudo[235787]: pam_unix(sudo:session): session closed for user root
Nov 23 21:09:04 compute-1 ceph-mon[80135]: pgmap v829: 337 pgs: 337 active+clean; 111 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 7.8 KiB/s rd, 1.2 KiB/s wr, 2 op/s
Nov 23 21:09:04 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3740722688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:09:04 compute-1 nova_compute[230183]: 2025-11-23 21:09:04.702 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:09:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:04.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:09:05 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/249885637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:09:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:06.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:06 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:09:06 compute-1 ceph-mon[80135]: pgmap v830: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.2 KiB/s wr, 29 op/s
Nov 23 21:09:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:06.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:07 compute-1 nova_compute[230183]: 2025-11-23 21:09:07.026 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:07 compute-1 nova_compute[230183]: 2025-11-23 21:09:07.373 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:07 compute-1 nova_compute[230183]: 2025-11-23 21:09:07.478 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:08.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:08 compute-1 ceph-mon[80135]: pgmap v831: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 23 21:09:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/2540947135' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 21:09:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/2540947135' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 21:09:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1052550188' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:09:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:08.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:09 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/77389808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:09:09 compute-1 nova_compute[230183]: 2025-11-23 21:09:09.705 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:10.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:10 compute-1 ceph-mon[80135]: pgmap v832: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Nov 23 21:09:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:10.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:11 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:09:11 compute-1 nova_compute[230183]: 2025-11-23 21:09:11.835 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:11 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:09:11.836 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 21:09:11 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:09:11.836 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 23 21:09:12 compute-1 nova_compute[230183]: 2025-11-23 21:09:12.060 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:12.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:12 compute-1 ceph-mon[80135]: pgmap v833: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 23 21:09:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:12.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:14.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:14 compute-1 nova_compute[230183]: 2025-11-23 21:09:14.593 230187 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763932139.5925066, 451aa9f7-4cd0-413e-beed-8a30a8685ff1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 23 21:09:14 compute-1 nova_compute[230183]: 2025-11-23 21:09:14.594 230187 INFO nova.compute.manager [-] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] VM Stopped (Lifecycle Event)
Nov 23 21:09:14 compute-1 nova_compute[230183]: 2025-11-23 21:09:14.614 230187 DEBUG nova.compute.manager [None req-4bbdd229-5301-40e6-9baf-078e4cdcaa05 - - - - - -] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:09:14 compute-1 ceph-mon[80135]: pgmap v834: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 938 B/s wr, 26 op/s
Nov 23 21:09:14 compute-1 nova_compute[230183]: 2025-11-23 21:09:14.746 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:14.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:16.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:16 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:09:16 compute-1 ceph-mon[80135]: pgmap v835: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 938 B/s wr, 27 op/s
Nov 23 21:09:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:16.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:17 compute-1 nova_compute[230183]: 2025-11-23 21:09:17.061 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:17 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:09:17.838 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d8ff4ac4-2bee-48db-b79e-2466bc4db046, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:09:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:18.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:18 compute-1 podman[235821]: 2025-11-23 21:09:18.668917392 +0000 UTC m=+0.061252314 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 21:09:18 compute-1 ceph-mon[80135]: pgmap v836: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Nov 23 21:09:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:09:18 compute-1 podman[235820]: 2025-11-23 21:09:18.688820893 +0000 UTC m=+0.093208667 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Nov 23 21:09:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:18.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:19 compute-1 ceph-mon[80135]: pgmap v837: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 23 21:09:19 compute-1 nova_compute[230183]: 2025-11-23 21:09:19.760 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:20 compute-1 sshd-session[235864]: Invalid user ubuntu from 92.118.39.92 port 44230
Nov 23 21:09:20 compute-1 sshd-session[235864]: Connection closed by invalid user ubuntu 92.118.39.92 port 44230 [preauth]
Nov 23 21:09:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:09:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:20.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:09:20 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2485713192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:09:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:09:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:20.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:09:21 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:09:21 compute-1 ceph-mon[80135]: pgmap v838: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Nov 23 21:09:22 compute-1 nova_compute[230183]: 2025-11-23 21:09:22.112 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:09:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:22.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:09:22 compute-1 podman[235870]: 2025-11-23 21:09:22.694264781 +0000 UTC m=+0.096678989 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 21:09:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:22.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:24 compute-1 ceph-mon[80135]: pgmap v839: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Nov 23 21:09:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:24.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:24 compute-1 sudo[235892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:09:24 compute-1 sudo[235892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:09:24 compute-1 sudo[235892]: pam_unix(sudo:session): session closed for user root
Nov 23 21:09:24 compute-1 nova_compute[230183]: 2025-11-23 21:09:24.763 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:24.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:26.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:26 compute-1 ceph-mon[80135]: pgmap v840: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 23 21:09:26 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1642653001' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:09:26 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1869792718' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:09:26 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:09:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:26.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:27 compute-1 nova_compute[230183]: 2025-11-23 21:09:27.114 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:28.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:28 compute-1 ceph-mon[80135]: pgmap v841: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 23 21:09:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:28.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:29 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2717229072' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:09:29 compute-1 nova_compute[230183]: 2025-11-23 21:09:29.766 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:30.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:30 compute-1 ceph-mon[80135]: pgmap v842: 337 pgs: 337 active+clean; 84 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Nov 23 21:09:30 compute-1 sudo[235920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 21:09:30 compute-1 sudo[235920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:09:30 compute-1 sudo[235920]: pam_unix(sudo:session): session closed for user root
Nov 23 21:09:30 compute-1 nova_compute[230183]: 2025-11-23 21:09:30.540 230187 DEBUG oslo_concurrency.lockutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:09:30 compute-1 nova_compute[230183]: 2025-11-23 21:09:30.540 230187 DEBUG oslo_concurrency.lockutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:09:30 compute-1 nova_compute[230183]: 2025-11-23 21:09:30.553 230187 DEBUG nova.compute.manager [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 23 21:09:30 compute-1 sudo[235945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 21:09:30 compute-1 sudo[235945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:09:30 compute-1 nova_compute[230183]: 2025-11-23 21:09:30.615 230187 DEBUG oslo_concurrency.lockutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:09:30 compute-1 nova_compute[230183]: 2025-11-23 21:09:30.616 230187 DEBUG oslo_concurrency.lockutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:09:30 compute-1 nova_compute[230183]: 2025-11-23 21:09:30.622 230187 DEBUG nova.virt.hardware [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 23 21:09:30 compute-1 nova_compute[230183]: 2025-11-23 21:09:30.622 230187 INFO nova.compute.claims [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Claim successful on node compute-1.ctlplane.example.com
Nov 23 21:09:30 compute-1 nova_compute[230183]: 2025-11-23 21:09:30.710 230187 DEBUG oslo_concurrency.processutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:09:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:09:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:30.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:09:31 compute-1 sudo[235945]: pam_unix(sudo:session): session closed for user root
Nov 23 21:09:31 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:09:31 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2351416075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:09:31 compute-1 nova_compute[230183]: 2025-11-23 21:09:31.171 230187 DEBUG oslo_concurrency.processutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:09:31 compute-1 nova_compute[230183]: 2025-11-23 21:09:31.179 230187 DEBUG nova.compute.provider_tree [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:09:31 compute-1 nova_compute[230183]: 2025-11-23 21:09:31.196 230187 DEBUG nova.scheduler.client.report [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:09:31 compute-1 nova_compute[230183]: 2025-11-23 21:09:31.221 230187 DEBUG oslo_concurrency.lockutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:09:31 compute-1 nova_compute[230183]: 2025-11-23 21:09:31.222 230187 DEBUG nova.compute.manager [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 23 21:09:31 compute-1 nova_compute[230183]: 2025-11-23 21:09:31.268 230187 INFO nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 23 21:09:31 compute-1 nova_compute[230183]: 2025-11-23 21:09:31.271 230187 DEBUG nova.compute.manager [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 23 21:09:31 compute-1 nova_compute[230183]: 2025-11-23 21:09:31.272 230187 DEBUG nova.network.neutron [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 23 21:09:31 compute-1 nova_compute[230183]: 2025-11-23 21:09:31.287 230187 DEBUG nova.compute.manager [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 23 21:09:31 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2351416075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:09:31 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:09:31 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 21:09:31 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:09:31 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:09:31 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 21:09:31 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 21:09:31 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:09:31 compute-1 nova_compute[230183]: 2025-11-23 21:09:31.379 230187 DEBUG nova.compute.manager [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 23 21:09:31 compute-1 nova_compute[230183]: 2025-11-23 21:09:31.380 230187 DEBUG nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 23 21:09:31 compute-1 nova_compute[230183]: 2025-11-23 21:09:31.381 230187 INFO nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Creating image(s)
Nov 23 21:09:31 compute-1 nova_compute[230183]: 2025-11-23 21:09:31.403 230187 DEBUG nova.storage.rbd_utils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:09:31 compute-1 nova_compute[230183]: 2025-11-23 21:09:31.426 230187 DEBUG nova.storage.rbd_utils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:09:31 compute-1 nova_compute[230183]: 2025-11-23 21:09:31.451 230187 DEBUG nova.storage.rbd_utils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:09:31 compute-1 nova_compute[230183]: 2025-11-23 21:09:31.453 230187 DEBUG oslo_concurrency.processutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:09:31 compute-1 nova_compute[230183]: 2025-11-23 21:09:31.471 230187 DEBUG nova.policy [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9fb5352c62684f2ba3a326a953a10dfe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '782593db60784ab8bff41fe87d72ff5f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 23 21:09:31 compute-1 nova_compute[230183]: 2025-11-23 21:09:31.506 230187 DEBUG oslo_concurrency.processutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:09:31 compute-1 nova_compute[230183]: 2025-11-23 21:09:31.507 230187 DEBUG oslo_concurrency.lockutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:09:31 compute-1 nova_compute[230183]: 2025-11-23 21:09:31.507 230187 DEBUG oslo_concurrency.lockutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:09:31 compute-1 nova_compute[230183]: 2025-11-23 21:09:31.507 230187 DEBUG oslo_concurrency.lockutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:09:31 compute-1 nova_compute[230183]: 2025-11-23 21:09:31.530 230187 DEBUG nova.storage.rbd_utils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:09:31 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:09:31 compute-1 nova_compute[230183]: 2025-11-23 21:09:31.534 230187 DEBUG oslo_concurrency.processutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:09:31 compute-1 nova_compute[230183]: 2025-11-23 21:09:31.795 230187 DEBUG oslo_concurrency.processutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:09:31 compute-1 nova_compute[230183]: 2025-11-23 21:09:31.865 230187 DEBUG nova.storage.rbd_utils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] resizing rbd image 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 23 21:09:31 compute-1 nova_compute[230183]: 2025-11-23 21:09:31.982 230187 DEBUG nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 23 21:09:31 compute-1 nova_compute[230183]: 2025-11-23 21:09:31.983 230187 DEBUG nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Ensure instance console log exists: /var/lib/nova/instances/227fff00-2bf2-4d7a-9ee7-ff4eaddc0880/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 23 21:09:31 compute-1 nova_compute[230183]: 2025-11-23 21:09:31.984 230187 DEBUG oslo_concurrency.lockutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:09:31 compute-1 nova_compute[230183]: 2025-11-23 21:09:31.984 230187 DEBUG oslo_concurrency.lockutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:09:31 compute-1 nova_compute[230183]: 2025-11-23 21:09:31.985 230187 DEBUG oslo_concurrency.lockutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:09:32 compute-1 nova_compute[230183]: 2025-11-23 21:09:32.116 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:32.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:32 compute-1 ceph-mon[80135]: pgmap v843: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Nov 23 21:09:32 compute-1 nova_compute[230183]: 2025-11-23 21:09:32.538 230187 DEBUG nova.network.neutron [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Successfully created port: 540c04be-373c-41ca-adee-2010345a34df _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 23 21:09:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:32.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:09:33 compute-1 nova_compute[230183]: 2025-11-23 21:09:33.583 230187 DEBUG nova.network.neutron [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Successfully updated port: 540c04be-373c-41ca-adee-2010345a34df _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 23 21:09:33 compute-1 nova_compute[230183]: 2025-11-23 21:09:33.598 230187 DEBUG oslo_concurrency.lockutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "refresh_cache-227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:09:33 compute-1 nova_compute[230183]: 2025-11-23 21:09:33.598 230187 DEBUG oslo_concurrency.lockutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquired lock "refresh_cache-227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:09:33 compute-1 nova_compute[230183]: 2025-11-23 21:09:33.598 230187 DEBUG nova.network.neutron [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 23 21:09:33 compute-1 nova_compute[230183]: 2025-11-23 21:09:33.662 230187 DEBUG nova.compute.manager [req-96e14edf-44dd-46e5-b9e2-8a008bd77282 req-4fc7f48b-c06c-47cd-ad29-58f8cd17afc9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Received event network-changed-540c04be-373c-41ca-adee-2010345a34df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:09:33 compute-1 nova_compute[230183]: 2025-11-23 21:09:33.662 230187 DEBUG nova.compute.manager [req-96e14edf-44dd-46e5-b9e2-8a008bd77282 req-4fc7f48b-c06c-47cd-ad29-58f8cd17afc9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Refreshing instance network info cache due to event network-changed-540c04be-373c-41ca-adee-2010345a34df. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 23 21:09:33 compute-1 nova_compute[230183]: 2025-11-23 21:09:33.662 230187 DEBUG oslo_concurrency.lockutils [req-96e14edf-44dd-46e5-b9e2-8a008bd77282 req-4fc7f48b-c06c-47cd-ad29-58f8cd17afc9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:09:33 compute-1 nova_compute[230183]: 2025-11-23 21:09:33.879 230187 DEBUG nova.network.neutron [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 23 21:09:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:34.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:34 compute-1 ceph-mon[80135]: pgmap v844: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Nov 23 21:09:34 compute-1 nova_compute[230183]: 2025-11-23 21:09:34.498 230187 DEBUG nova.network.neutron [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Updating instance_info_cache with network_info: [{"id": "540c04be-373c-41ca-adee-2010345a34df", "address": "fa:16:3e:9d:e3:b7", "network": {"id": "6ff6a2ba-50a1-444b-9685-151db9bcac89", "bridge": "br-int", "label": "tempest-network-smoke--285822202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap540c04be-37", "ovs_interfaceid": "540c04be-373c-41ca-adee-2010345a34df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:09:34 compute-1 nova_compute[230183]: 2025-11-23 21:09:34.524 230187 DEBUG oslo_concurrency.lockutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Releasing lock "refresh_cache-227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:09:34 compute-1 nova_compute[230183]: 2025-11-23 21:09:34.524 230187 DEBUG nova.compute.manager [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Instance network_info: |[{"id": "540c04be-373c-41ca-adee-2010345a34df", "address": "fa:16:3e:9d:e3:b7", "network": {"id": "6ff6a2ba-50a1-444b-9685-151db9bcac89", "bridge": "br-int", "label": "tempest-network-smoke--285822202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap540c04be-37", "ovs_interfaceid": "540c04be-373c-41ca-adee-2010345a34df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 23 21:09:34 compute-1 nova_compute[230183]: 2025-11-23 21:09:34.525 230187 DEBUG oslo_concurrency.lockutils [req-96e14edf-44dd-46e5-b9e2-8a008bd77282 req-4fc7f48b-c06c-47cd-ad29-58f8cd17afc9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:09:34 compute-1 nova_compute[230183]: 2025-11-23 21:09:34.525 230187 DEBUG nova.network.neutron [req-96e14edf-44dd-46e5-b9e2-8a008bd77282 req-4fc7f48b-c06c-47cd-ad29-58f8cd17afc9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Refreshing network info cache for port 540c04be-373c-41ca-adee-2010345a34df _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 23 21:09:34 compute-1 nova_compute[230183]: 2025-11-23 21:09:34.528 230187 DEBUG nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Start _get_guest_xml network_info=[{"id": "540c04be-373c-41ca-adee-2010345a34df", "address": "fa:16:3e:9d:e3:b7", "network": {"id": "6ff6a2ba-50a1-444b-9685-151db9bcac89", "bridge": "br-int", "label": "tempest-network-smoke--285822202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap540c04be-37", "ovs_interfaceid": "540c04be-373c-41ca-adee-2010345a34df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T21:05:43Z,direct_url=<?>,disk_format='qcow2',id=3c45fa6c-8a99-4359-a34e-d89f4e1e77d0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3f8fb5175f85402ba20cf9c6989d47cf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-23T21:05:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_options': None, 'encryption_format': None, 'guest_format': None, 'encrypted': False, 'size': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'image_id': '3c45fa6c-8a99-4359-a34e-d89f4e1e77d0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 23 21:09:34 compute-1 nova_compute[230183]: 2025-11-23 21:09:34.531 230187 WARNING nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 21:09:34 compute-1 nova_compute[230183]: 2025-11-23 21:09:34.538 230187 DEBUG nova.virt.libvirt.host [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 23 21:09:34 compute-1 nova_compute[230183]: 2025-11-23 21:09:34.538 230187 DEBUG nova.virt.libvirt.host [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 23 21:09:34 compute-1 nova_compute[230183]: 2025-11-23 21:09:34.544 230187 DEBUG nova.virt.libvirt.host [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 23 21:09:34 compute-1 nova_compute[230183]: 2025-11-23 21:09:34.544 230187 DEBUG nova.virt.libvirt.host [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 23 21:09:34 compute-1 nova_compute[230183]: 2025-11-23 21:09:34.545 230187 DEBUG nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 23 21:09:34 compute-1 nova_compute[230183]: 2025-11-23 21:09:34.545 230187 DEBUG nova.virt.hardware [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-23T21:05:42Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='56044b93-2979-48aa-b67f-c37e1b489306',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T21:05:43Z,direct_url=<?>,disk_format='qcow2',id=3c45fa6c-8a99-4359-a34e-d89f4e1e77d0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3f8fb5175f85402ba20cf9c6989d47cf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-23T21:05:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 23 21:09:34 compute-1 nova_compute[230183]: 2025-11-23 21:09:34.545 230187 DEBUG nova.virt.hardware [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 23 21:09:34 compute-1 nova_compute[230183]: 2025-11-23 21:09:34.545 230187 DEBUG nova.virt.hardware [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 23 21:09:34 compute-1 nova_compute[230183]: 2025-11-23 21:09:34.546 230187 DEBUG nova.virt.hardware [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 23 21:09:34 compute-1 nova_compute[230183]: 2025-11-23 21:09:34.546 230187 DEBUG nova.virt.hardware [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 23 21:09:34 compute-1 nova_compute[230183]: 2025-11-23 21:09:34.546 230187 DEBUG nova.virt.hardware [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 23 21:09:34 compute-1 nova_compute[230183]: 2025-11-23 21:09:34.546 230187 DEBUG nova.virt.hardware [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 23 21:09:34 compute-1 nova_compute[230183]: 2025-11-23 21:09:34.546 230187 DEBUG nova.virt.hardware [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 23 21:09:34 compute-1 nova_compute[230183]: 2025-11-23 21:09:34.547 230187 DEBUG nova.virt.hardware [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 23 21:09:34 compute-1 nova_compute[230183]: 2025-11-23 21:09:34.547 230187 DEBUG nova.virt.hardware [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 23 21:09:34 compute-1 nova_compute[230183]: 2025-11-23 21:09:34.547 230187 DEBUG nova.virt.hardware [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 23 21:09:34 compute-1 nova_compute[230183]: 2025-11-23 21:09:34.549 230187 DEBUG oslo_concurrency.processutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:09:34 compute-1 nova_compute[230183]: 2025-11-23 21:09:34.768 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:09:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:34.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:09:35 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 21:09:35 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2460303140' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:09:35 compute-1 nova_compute[230183]: 2025-11-23 21:09:35.024 230187 DEBUG oslo_concurrency.processutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:09:35 compute-1 nova_compute[230183]: 2025-11-23 21:09:35.043 230187 DEBUG nova.storage.rbd_utils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:09:35 compute-1 nova_compute[230183]: 2025-11-23 21:09:35.046 230187 DEBUG oslo_concurrency.processutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:09:35 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2460303140' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:09:35 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 21:09:35 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/935451246' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:09:35 compute-1 nova_compute[230183]: 2025-11-23 21:09:35.490 230187 DEBUG oslo_concurrency.processutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:09:35 compute-1 nova_compute[230183]: 2025-11-23 21:09:35.492 230187 DEBUG nova.virt.libvirt.vif [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='',created_at=2025-11-23T21:09:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-626843533',display_name='tempest-TestNetworkBasicOps-server-626843533',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-626843533',id=4,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBzFKgfz1QVXAYBgw9WYLDmImQIyNZIUJvYaUSeZsmfvEoA7CUytAymkLL0tqBwm8cJVrzUl6E9R6D/qdooFrc51SiAGOyjiHvRBM9c3gaFOzuWbTw1Aa3lZ7MmCQiSUEQ==',key_name='tempest-TestNetworkBasicOps-1952591884',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-mabh37mo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T21:09:31Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=227fff00-2bf2-4d7a-9ee7-ff4eaddc0880,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "540c04be-373c-41ca-adee-2010345a34df", "address": "fa:16:3e:9d:e3:b7", "network": {"id": "6ff6a2ba-50a1-444b-9685-151db9bcac89", "bridge": "br-int", "label": "tempest-network-smoke--285822202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap540c04be-37", "ovs_interfaceid": "540c04be-373c-41ca-adee-2010345a34df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 23 21:09:35 compute-1 nova_compute[230183]: 2025-11-23 21:09:35.492 230187 DEBUG nova.network.os_vif_util [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "540c04be-373c-41ca-adee-2010345a34df", "address": "fa:16:3e:9d:e3:b7", "network": {"id": "6ff6a2ba-50a1-444b-9685-151db9bcac89", "bridge": "br-int", "label": "tempest-network-smoke--285822202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap540c04be-37", "ovs_interfaceid": "540c04be-373c-41ca-adee-2010345a34df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 23 21:09:35 compute-1 nova_compute[230183]: 2025-11-23 21:09:35.493 230187 DEBUG nova.network.os_vif_util [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e3:b7,bridge_name='br-int',has_traffic_filtering=True,id=540c04be-373c-41ca-adee-2010345a34df,network=Network(6ff6a2ba-50a1-444b-9685-151db9bcac89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap540c04be-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
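The pair of os_vif_util debug lines above shows Nova's VIF dict being converted into the os-vif `VIFOpenVSwitch` object. A minimal sketch of that field mapping, using only values taken from the logged JSON; the helper name `vif_to_osvif_fields` is hypothetical, not Nova's actual function:

```python
import json

# VIF payload as logged by nova.network.os_vif_util, abridged to the
# fields that appear in the converted VIFOpenVSwitch(...) repr.
vif_json = """
{"id": "540c04be-373c-41ca-adee-2010345a34df",
 "address": "fa:16:3e:9d:e3:b7",
 "details": {"port_filter": true, "bridge_name": "br-int"},
 "devname": "tap540c04be-37",
 "active": false,
 "preserve_on_delete": false}
"""

def vif_to_osvif_fields(vif):
    # Map Nova's VIF dict keys onto the attribute names visible in the
    # logged VIFOpenVSwitch object (hypothetical illustration only).
    return {
        "id": vif["id"],
        "address": vif["address"],
        "bridge_name": vif["details"]["bridge_name"],
        "has_traffic_filtering": vif["details"]["port_filter"],
        "vif_name": vif["devname"],
        "active": vif["active"],
        "preserve_on_delete": vif["preserve_on_delete"],
    }

fields = vif_to_osvif_fields(json.loads(vif_json))
print(fields["vif_name"])  # tap540c04be-37
```

Note how `devname` becomes `vif_name` and `details.port_filter` becomes `has_traffic_filtering` in the converted object.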
Nov 23 21:09:35 compute-1 nova_compute[230183]: 2025-11-23 21:09:35.495 230187 DEBUG nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] End _get_guest_xml xml=<domain type="kvm">
Nov 23 21:09:35 compute-1 nova_compute[230183]:   <uuid>227fff00-2bf2-4d7a-9ee7-ff4eaddc0880</uuid>
Nov 23 21:09:35 compute-1 nova_compute[230183]:   <name>instance-00000004</name>
Nov 23 21:09:35 compute-1 nova_compute[230183]:   <memory>131072</memory>
Nov 23 21:09:35 compute-1 nova_compute[230183]:   <vcpu>1</vcpu>
Nov 23 21:09:35 compute-1 nova_compute[230183]:   <metadata>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 21:09:35 compute-1 nova_compute[230183]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:       <nova:name>tempest-TestNetworkBasicOps-server-626843533</nova:name>
Nov 23 21:09:35 compute-1 nova_compute[230183]:       <nova:creationTime>2025-11-23 21:09:34</nova:creationTime>
Nov 23 21:09:35 compute-1 nova_compute[230183]:       <nova:flavor name="m1.nano">
Nov 23 21:09:35 compute-1 nova_compute[230183]:         <nova:memory>128</nova:memory>
Nov 23 21:09:35 compute-1 nova_compute[230183]:         <nova:disk>1</nova:disk>
Nov 23 21:09:35 compute-1 nova_compute[230183]:         <nova:swap>0</nova:swap>
Nov 23 21:09:35 compute-1 nova_compute[230183]:         <nova:ephemeral>0</nova:ephemeral>
Nov 23 21:09:35 compute-1 nova_compute[230183]:         <nova:vcpus>1</nova:vcpus>
Nov 23 21:09:35 compute-1 nova_compute[230183]:       </nova:flavor>
Nov 23 21:09:35 compute-1 nova_compute[230183]:       <nova:owner>
Nov 23 21:09:35 compute-1 nova_compute[230183]:         <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 21:09:35 compute-1 nova_compute[230183]:         <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 21:09:35 compute-1 nova_compute[230183]:       </nova:owner>
Nov 23 21:09:35 compute-1 nova_compute[230183]:       <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:       <nova:ports>
Nov 23 21:09:35 compute-1 nova_compute[230183]:         <nova:port uuid="540c04be-373c-41ca-adee-2010345a34df">
Nov 23 21:09:35 compute-1 nova_compute[230183]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:         </nova:port>
Nov 23 21:09:35 compute-1 nova_compute[230183]:       </nova:ports>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     </nova:instance>
Nov 23 21:09:35 compute-1 nova_compute[230183]:   </metadata>
Nov 23 21:09:35 compute-1 nova_compute[230183]:   <sysinfo type="smbios">
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <system>
Nov 23 21:09:35 compute-1 nova_compute[230183]:       <entry name="manufacturer">RDO</entry>
Nov 23 21:09:35 compute-1 nova_compute[230183]:       <entry name="product">OpenStack Compute</entry>
Nov 23 21:09:35 compute-1 nova_compute[230183]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 23 21:09:35 compute-1 nova_compute[230183]:       <entry name="serial">227fff00-2bf2-4d7a-9ee7-ff4eaddc0880</entry>
Nov 23 21:09:35 compute-1 nova_compute[230183]:       <entry name="uuid">227fff00-2bf2-4d7a-9ee7-ff4eaddc0880</entry>
Nov 23 21:09:35 compute-1 nova_compute[230183]:       <entry name="family">Virtual Machine</entry>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     </system>
Nov 23 21:09:35 compute-1 nova_compute[230183]:   </sysinfo>
Nov 23 21:09:35 compute-1 nova_compute[230183]:   <os>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <boot dev="hd"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <smbios mode="sysinfo"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:   </os>
Nov 23 21:09:35 compute-1 nova_compute[230183]:   <features>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <acpi/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <apic/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <vmcoreinfo/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:   </features>
Nov 23 21:09:35 compute-1 nova_compute[230183]:   <clock offset="utc">
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <timer name="pit" tickpolicy="delay"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <timer name="hpet" present="no"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:   </clock>
Nov 23 21:09:35 compute-1 nova_compute[230183]:   <cpu mode="host-model" match="exact">
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <topology sockets="1" cores="1" threads="1"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:   </cpu>
Nov 23 21:09:35 compute-1 nova_compute[230183]:   <devices>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <disk type="network" device="disk">
Nov 23 21:09:35 compute-1 nova_compute[230183]:       <driver type="raw" cache="none"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:       <source protocol="rbd" name="vms/227fff00-2bf2-4d7a-9ee7-ff4eaddc0880_disk">
Nov 23 21:09:35 compute-1 nova_compute[230183]:         <host name="192.168.122.100" port="6789"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:         <host name="192.168.122.102" port="6789"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:         <host name="192.168.122.101" port="6789"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:       </source>
Nov 23 21:09:35 compute-1 nova_compute[230183]:       <auth username="openstack">
Nov 23 21:09:35 compute-1 nova_compute[230183]:         <secret type="ceph" uuid="03808be8-ae4a-5548-82e6-4a294f1bc627"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:       </auth>
Nov 23 21:09:35 compute-1 nova_compute[230183]:       <target dev="vda" bus="virtio"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     </disk>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <disk type="network" device="cdrom">
Nov 23 21:09:35 compute-1 nova_compute[230183]:       <driver type="raw" cache="none"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:       <source protocol="rbd" name="vms/227fff00-2bf2-4d7a-9ee7-ff4eaddc0880_disk.config">
Nov 23 21:09:35 compute-1 nova_compute[230183]:         <host name="192.168.122.100" port="6789"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:         <host name="192.168.122.102" port="6789"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:         <host name="192.168.122.101" port="6789"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:       </source>
Nov 23 21:09:35 compute-1 nova_compute[230183]:       <auth username="openstack">
Nov 23 21:09:35 compute-1 nova_compute[230183]:         <secret type="ceph" uuid="03808be8-ae4a-5548-82e6-4a294f1bc627"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:       </auth>
Nov 23 21:09:35 compute-1 nova_compute[230183]:       <target dev="sda" bus="sata"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     </disk>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <interface type="ethernet">
Nov 23 21:09:35 compute-1 nova_compute[230183]:       <mac address="fa:16:3e:9d:e3:b7"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:       <model type="virtio"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:       <driver name="vhost" rx_queue_size="512"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:       <mtu size="1442"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:       <target dev="tap540c04be-37"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     </interface>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <serial type="pty">
Nov 23 21:09:35 compute-1 nova_compute[230183]:       <log file="/var/lib/nova/instances/227fff00-2bf2-4d7a-9ee7-ff4eaddc0880/console.log" append="off"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     </serial>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <video>
Nov 23 21:09:35 compute-1 nova_compute[230183]:       <model type="virtio"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     </video>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <input type="tablet" bus="usb"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <rng model="virtio">
Nov 23 21:09:35 compute-1 nova_compute[230183]:       <backend model="random">/dev/urandom</backend>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     </rng>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <controller type="usb" index="0"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     <memballoon model="virtio">
Nov 23 21:09:35 compute-1 nova_compute[230183]:       <stats period="10"/>
Nov 23 21:09:35 compute-1 nova_compute[230183]:     </memballoon>
Nov 23 21:09:35 compute-1 nova_compute[230183]:   </devices>
Nov 23 21:09:35 compute-1 nova_compute[230183]: </domain>
Nov 23 21:09:35 compute-1 nova_compute[230183]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
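The generated domain XML attaches both the root disk (`vda`) and the config-drive CD-ROM (`sda`) as RBD network disks in the Ceph `vms` pool, each listing all three monitors. A small standard-library sketch showing how the device-to-image mapping can be pulled out of such XML (the snippet below is an abridged copy of the logged `<devices>` section):

```python
import xml.etree.ElementTree as ET

# Abridged from the domain XML logged by _get_guest_xml above.
domain_xml = """
<domain type="kvm">
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="vms/227fff00-2bf2-4d7a-9ee7-ff4eaddc0880_disk">
        <host name="192.168.122.100" port="6789"/>
        <host name="192.168.122.102" port="6789"/>
        <host name="192.168.122.101" port="6789"/>
      </source>
      <target dev="vda" bus="virtio"/>
    </disk>
    <disk type="network" device="cdrom">
      <source protocol="rbd" name="vms/227fff00-2bf2-4d7a-9ee7-ff4eaddc0880_disk.config"/>
      <target dev="sda" bus="sata"/>
    </disk>
  </devices>
</domain>
"""

root = ET.fromstring(domain_xml)
# Map target device name -> RBD image ("pool/image") for every disk.
disks = {
    d.find("target").get("dev"): d.find("source").get("name")
    for d in root.iter("disk")
}
print(disks)
```

The `_disk.config` image is the config drive whose creation appears further down in this log; at this point the rbd_utils check reported it does not exist yet.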
Nov 23 21:09:35 compute-1 nova_compute[230183]: 2025-11-23 21:09:35.497 230187 DEBUG nova.compute.manager [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Preparing to wait for external event network-vif-plugged-540c04be-373c-41ca-adee-2010345a34df prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 23 21:09:35 compute-1 nova_compute[230183]: 2025-11-23 21:09:35.497 230187 DEBUG oslo_concurrency.lockutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:09:35 compute-1 nova_compute[230183]: 2025-11-23 21:09:35.497 230187 DEBUG oslo_concurrency.lockutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:09:35 compute-1 nova_compute[230183]: 2025-11-23 21:09:35.497 230187 DEBUG oslo_concurrency.lockutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:09:35 compute-1 nova_compute[230183]: 2025-11-23 21:09:35.498 230187 DEBUG nova.virt.libvirt.vif [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='',created_at=2025-11-23T21:09:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-626843533',display_name='tempest-TestNetworkBasicOps-server-626843533',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-626843533',id=4,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBzFKgfz1QVXAYBgw9WYLDmImQIyNZIUJvYaUSeZsmfvEoA7CUytAymkLL0tqBwm8cJVrzUl6E9R6D/qdooFrc51SiAGOyjiHvRBM9c3gaFOzuWbTw1Aa3lZ7MmCQiSUEQ==',key_name='tempest-TestNetworkBasicOps-1952591884',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-mabh37mo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T21:09:31Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=227fff00-2bf2-4d7a-9ee7-ff4eaddc0880,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "540c04be-373c-41ca-adee-2010345a34df", "address": "fa:16:3e:9d:e3:b7", "network": {"id": "6ff6a2ba-50a1-444b-9685-151db9bcac89", "bridge": "br-int", "label": "tempest-network-smoke--285822202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap540c04be-37", "ovs_interfaceid": "540c04be-373c-41ca-adee-2010345a34df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 23 21:09:35 compute-1 nova_compute[230183]: 2025-11-23 21:09:35.498 230187 DEBUG nova.network.os_vif_util [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "540c04be-373c-41ca-adee-2010345a34df", "address": "fa:16:3e:9d:e3:b7", "network": {"id": "6ff6a2ba-50a1-444b-9685-151db9bcac89", "bridge": "br-int", "label": "tempest-network-smoke--285822202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap540c04be-37", "ovs_interfaceid": "540c04be-373c-41ca-adee-2010345a34df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 23 21:09:35 compute-1 nova_compute[230183]: 2025-11-23 21:09:35.499 230187 DEBUG nova.network.os_vif_util [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e3:b7,bridge_name='br-int',has_traffic_filtering=True,id=540c04be-373c-41ca-adee-2010345a34df,network=Network(6ff6a2ba-50a1-444b-9685-151db9bcac89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap540c04be-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 23 21:09:35 compute-1 nova_compute[230183]: 2025-11-23 21:09:35.499 230187 DEBUG os_vif [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e3:b7,bridge_name='br-int',has_traffic_filtering=True,id=540c04be-373c-41ca-adee-2010345a34df,network=Network(6ff6a2ba-50a1-444b-9685-151db9bcac89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap540c04be-37') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 23 21:09:35 compute-1 nova_compute[230183]: 2025-11-23 21:09:35.499 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:35 compute-1 nova_compute[230183]: 2025-11-23 21:09:35.500 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:09:35 compute-1 nova_compute[230183]: 2025-11-23 21:09:35.500 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 23 21:09:35 compute-1 nova_compute[230183]: 2025-11-23 21:09:35.503 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:35 compute-1 nova_compute[230183]: 2025-11-23 21:09:35.503 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap540c04be-37, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:09:35 compute-1 nova_compute[230183]: 2025-11-23 21:09:35.503 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap540c04be-37, col_values=(('external_ids', {'iface-id': '540c04be-373c-41ca-adee-2010345a34df', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:e3:b7', 'vm-uuid': '227fff00-2bf2-4d7a-9ee7-ff4eaddc0880'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:09:35 compute-1 nova_compute[230183]: 2025-11-23 21:09:35.504 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:35 compute-1 NetworkManager[49021]: <info>  [1763932175.5056] manager: (tap540c04be-37): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Nov 23 21:09:35 compute-1 nova_compute[230183]: 2025-11-23 21:09:35.507 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:09:35 compute-1 nova_compute[230183]: 2025-11-23 21:09:35.510 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:35 compute-1 nova_compute[230183]: 2025-11-23 21:09:35.510 230187 INFO os_vif [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e3:b7,bridge_name='br-int',has_traffic_filtering=True,id=540c04be-373c-41ca-adee-2010345a34df,network=Network(6ff6a2ba-50a1-444b-9685-151db9bcac89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap540c04be-37')
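The ovsdbapp transaction at 21:09:35.503 adds `tap540c04be-37` to `br-int` and stamps the Interface record's `external_ids`, which is what lets OVN bind the port to the Neutron port UUID. os-vif goes through the OVSDB IDL rather than the CLI, but an assumption worth stating is that the same state could be produced with `ovs-vsctl`; the sketch below just assembles that equivalent command line from the logged values:

```python
# external_ids exactly as written by the logged DbSetCommand.
external_ids = {
    "iface-id": "540c04be-373c-41ca-adee-2010345a34df",
    "iface-status": "active",
    "attached-mac": "fa:16:3e:9d:e3:b7",
    "vm-uuid": "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880",
}

# Roughly equivalent ovs-vsctl invocation (illustrative; not what the
# IDL-based os-vif plugin actually executes).
argv = ["ovs-vsctl", "--may-exist", "add-port", "br-int", "tap540c04be-37"]
argv += ["--", "set", "Interface", "tap540c04be-37"]
argv += [f"external_ids:{k}={v}" for k, v in external_ids.items()]

print(" ".join(argv))
```

The `iface-id` value is the Neutron port UUID; ovn-controller watches for it in the Interface table and then reports the port as bound, which triggers the `network-vif-plugged` event Nova is waiting on above.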
Nov 23 21:09:35 compute-1 nova_compute[230183]: 2025-11-23 21:09:35.560 230187 DEBUG nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 23 21:09:35 compute-1 nova_compute[230183]: 2025-11-23 21:09:35.560 230187 DEBUG nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 23 21:09:35 compute-1 nova_compute[230183]: 2025-11-23 21:09:35.561 230187 DEBUG nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No VIF found with MAC fa:16:3e:9d:e3:b7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 23 21:09:35 compute-1 nova_compute[230183]: 2025-11-23 21:09:35.561 230187 INFO nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Using config drive
Nov 23 21:09:35 compute-1 nova_compute[230183]: 2025-11-23 21:09:35.582 230187 DEBUG nova.storage.rbd_utils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:09:35 compute-1 nova_compute[230183]: 2025-11-23 21:09:35.731 230187 DEBUG nova.network.neutron [req-96e14edf-44dd-46e5-b9e2-8a008bd77282 req-4fc7f48b-c06c-47cd-ad29-58f8cd17afc9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Updated VIF entry in instance network info cache for port 540c04be-373c-41ca-adee-2010345a34df. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 23 21:09:35 compute-1 nova_compute[230183]: 2025-11-23 21:09:35.732 230187 DEBUG nova.network.neutron [req-96e14edf-44dd-46e5-b9e2-8a008bd77282 req-4fc7f48b-c06c-47cd-ad29-58f8cd17afc9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Updating instance_info_cache with network_info: [{"id": "540c04be-373c-41ca-adee-2010345a34df", "address": "fa:16:3e:9d:e3:b7", "network": {"id": "6ff6a2ba-50a1-444b-9685-151db9bcac89", "bridge": "br-int", "label": "tempest-network-smoke--285822202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap540c04be-37", "ovs_interfaceid": "540c04be-373c-41ca-adee-2010345a34df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:09:35 compute-1 nova_compute[230183]: 2025-11-23 21:09:35.745 230187 DEBUG oslo_concurrency.lockutils [req-96e14edf-44dd-46e5-b9e2-8a008bd77282 req-4fc7f48b-c06c-47cd-ad29-58f8cd17afc9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:09:35 compute-1 sudo[236273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 21:09:35 compute-1 sudo[236273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:09:35 compute-1 sudo[236273]: pam_unix(sudo:session): session closed for user root
Nov 23 21:09:36 compute-1 nova_compute[230183]: 2025-11-23 21:09:36.089 230187 INFO nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Creating config drive at /var/lib/nova/instances/227fff00-2bf2-4d7a-9ee7-ff4eaddc0880/disk.config
Nov 23 21:09:36 compute-1 nova_compute[230183]: 2025-11-23 21:09:36.097 230187 DEBUG oslo_concurrency.processutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/227fff00-2bf2-4d7a-9ee7-ff4eaddc0880/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvtv24q34 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:09:36 compute-1 nova_compute[230183]: 2025-11-23 21:09:36.240 230187 DEBUG oslo_concurrency.processutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/227fff00-2bf2-4d7a-9ee7-ff4eaddc0880/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvtv24q34" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:09:36 compute-1 nova_compute[230183]: 2025-11-23 21:09:36.272 230187 DEBUG nova.storage.rbd_utils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:09:36 compute-1 nova_compute[230183]: 2025-11-23 21:09:36.277 230187 DEBUG oslo_concurrency.processutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/227fff00-2bf2-4d7a-9ee7-ff4eaddc0880/disk.config 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:09:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:09:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:36.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:09:36 compute-1 ceph-mon[80135]: pgmap v845: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 3.6 MiB/s wr, 86 op/s
Nov 23 21:09:36 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/935451246' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:09:36 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:09:36 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:09:36 compute-1 nova_compute[230183]: 2025-11-23 21:09:36.461 230187 DEBUG oslo_concurrency.processutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/227fff00-2bf2-4d7a-9ee7-ff4eaddc0880/disk.config 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.184s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:09:36 compute-1 nova_compute[230183]: 2025-11-23 21:09:36.462 230187 INFO nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Deleting local config drive /var/lib/nova/instances/227fff00-2bf2-4d7a-9ee7-ff4eaddc0880/disk.config because it was imported into RBD.
Nov 23 21:09:36 compute-1 kernel: tap540c04be-37: entered promiscuous mode
Nov 23 21:09:36 compute-1 ovn_controller[132845]: 2025-11-23T21:09:36Z|00064|binding|INFO|Claiming lport 540c04be-373c-41ca-adee-2010345a34df for this chassis.
Nov 23 21:09:36 compute-1 ovn_controller[132845]: 2025-11-23T21:09:36Z|00065|binding|INFO|540c04be-373c-41ca-adee-2010345a34df: Claiming fa:16:3e:9d:e3:b7 10.100.0.11
Nov 23 21:09:36 compute-1 nova_compute[230183]: 2025-11-23 21:09:36.516 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:36 compute-1 NetworkManager[49021]: <info>  [1763932176.5173] manager: (tap540c04be-37): new Tun device (/org/freedesktop/NetworkManager/Devices/43)
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.529 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:e3:b7 10.100.0.11'], port_security=['fa:16:3e:9d:e3:b7 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '227fff00-2bf2-4d7a-9ee7-ff4eaddc0880', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ff6a2ba-50a1-444b-9685-151db9bcac89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '20b5a6ce-6e21-4158-a0ab-eaca16146e81', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0c0604ff-606a-413a-88a2-c316eba90e56, chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=540c04be-373c-41ca-adee-2010345a34df) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.531 142158 INFO neutron.agent.ovn.metadata.agent [-] Port 540c04be-373c-41ca-adee-2010345a34df in datapath 6ff6a2ba-50a1-444b-9685-151db9bcac89 bound to our chassis
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.532 142158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6ff6a2ba-50a1-444b-9685-151db9bcac89
Nov 23 21:09:36 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.544 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[88ec1ad7-6ab7-4a4c-a8f4-de1832cfe6e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.546 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6ff6a2ba-51 in ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 23 21:09:36 compute-1 systemd-machined[193469]: New machine qemu-3-instance-00000004.
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.548 233901 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6ff6a2ba-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.548 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[3ed45358-e343-4785-94c3-586d5ee0ed1c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.549 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[4669f84a-f4b2-42f6-b5e6-2b0f0bbd7aab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:09:36 compute-1 systemd-udevd[236352]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.563 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[6fd11e1b-7f3b-4f70-9b55-c16cfe9d17ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:09:36 compute-1 NetworkManager[49021]: <info>  [1763932176.5673] device (tap540c04be-37): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 23 21:09:36 compute-1 NetworkManager[49021]: <info>  [1763932176.5681] device (tap540c04be-37): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 23 21:09:36 compute-1 systemd[1]: Started Virtual Machine qemu-3-instance-00000004.
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.587 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[45ee7ca1-5351-47cc-ae0a-a707a14fe496]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:09:36 compute-1 nova_compute[230183]: 2025-11-23 21:09:36.588 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:36 compute-1 ovn_controller[132845]: 2025-11-23T21:09:36Z|00066|binding|INFO|Setting lport 540c04be-373c-41ca-adee-2010345a34df ovn-installed in OVS
Nov 23 21:09:36 compute-1 ovn_controller[132845]: 2025-11-23T21:09:36Z|00067|binding|INFO|Setting lport 540c04be-373c-41ca-adee-2010345a34df up in Southbound
Nov 23 21:09:36 compute-1 nova_compute[230183]: 2025-11-23 21:09:36.598 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.620 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[61959e40-4a25-430a-afd4-9eaa5dfdd0da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:09:36 compute-1 systemd-udevd[236355]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 21:09:36 compute-1 NetworkManager[49021]: <info>  [1763932176.6260] manager: (tap6ff6a2ba-50): new Veth device (/org/freedesktop/NetworkManager/Devices/44)
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.625 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[4e26efef-3140-43f9-91fc-c229051f77da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.657 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[b8328f13-f00b-4362-a30a-5ef4f2ff091d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.660 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[70050365-4fe1-4f7e-9ba7-291f84e7b5fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:09:36 compute-1 NetworkManager[49021]: <info>  [1763932176.6812] device (tap6ff6a2ba-50): carrier: link connected
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.685 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[adff457c-5ccc-4381-a239-d3b0a7571fdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.699 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[4b33a3ea-b029-424a-938f-da6570b8221b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ff6a2ba-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:e0:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 412525, 'reachable_time': 38939, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 236384, 'error': None, 'target': 'ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.712 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[101cec63-1e81-41ad-8d7c-4a8dda3be381]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1b:e098'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 412525, 'tstamp': 412525}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 236385, 'error': None, 'target': 'ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.724 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[476fd3bf-1dce-458b-9c94-4e64eba979e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ff6a2ba-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:e0:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 412525, 'reachable_time': 38939, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 236386, 'error': None, 'target': 'ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.750 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[6451b22f-a137-4f08-9944-916576f12a18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:09:36 compute-1 nova_compute[230183]: 2025-11-23 21:09:36.772 230187 DEBUG nova.compute.manager [req-939af7b8-45dc-4ccb-bfe7-97d1dff9730c req-f6569b54-7dae-47e2-b75e-349224716fb9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Received event network-vif-plugged-540c04be-373c-41ca-adee-2010345a34df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:09:36 compute-1 nova_compute[230183]: 2025-11-23 21:09:36.772 230187 DEBUG oslo_concurrency.lockutils [req-939af7b8-45dc-4ccb-bfe7-97d1dff9730c req-f6569b54-7dae-47e2-b75e-349224716fb9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:09:36 compute-1 nova_compute[230183]: 2025-11-23 21:09:36.773 230187 DEBUG oslo_concurrency.lockutils [req-939af7b8-45dc-4ccb-bfe7-97d1dff9730c req-f6569b54-7dae-47e2-b75e-349224716fb9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:09:36 compute-1 nova_compute[230183]: 2025-11-23 21:09:36.773 230187 DEBUG oslo_concurrency.lockutils [req-939af7b8-45dc-4ccb-bfe7-97d1dff9730c req-f6569b54-7dae-47e2-b75e-349224716fb9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:09:36 compute-1 nova_compute[230183]: 2025-11-23 21:09:36.773 230187 DEBUG nova.compute.manager [req-939af7b8-45dc-4ccb-bfe7-97d1dff9730c req-f6569b54-7dae-47e2-b75e-349224716fb9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Processing event network-vif-plugged-540c04be-373c-41ca-adee-2010345a34df _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.812 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[82b8287c-7a80-4bde-86c3-0bb567aa7575]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.814 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ff6a2ba-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.814 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.814 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6ff6a2ba-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:09:36 compute-1 kernel: tap6ff6a2ba-50: entered promiscuous mode
Nov 23 21:09:36 compute-1 nova_compute[230183]: 2025-11-23 21:09:36.816 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:36 compute-1 NetworkManager[49021]: <info>  [1763932176.8166] manager: (tap6ff6a2ba-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.818 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6ff6a2ba-50, col_values=(('external_ids', {'iface-id': '4bff4598-93d2-442e-90fe-19336d84eb93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:09:36 compute-1 ovn_controller[132845]: 2025-11-23T21:09:36Z|00068|binding|INFO|Releasing lport 4bff4598-93d2-442e-90fe-19336d84eb93 from this chassis (sb_readonly=0)
Nov 23 21:09:36 compute-1 nova_compute[230183]: 2025-11-23 21:09:36.831 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.832 142158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6ff6a2ba-50a1-444b-9685-151db9bcac89.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6ff6a2ba-50a1-444b-9685-151db9bcac89.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.833 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[d1b03d21-471d-46db-91e0-01bc1f0dea91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.834 142158 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]: global
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]:     log         /dev/log local0 debug
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]:     log-tag     haproxy-metadata-proxy-6ff6a2ba-50a1-444b-9685-151db9bcac89
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]:     user        root
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]:     group       root
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]:     maxconn     1024
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]:     pidfile     /var/lib/neutron/external/pids/6ff6a2ba-50a1-444b-9685-151db9bcac89.pid.haproxy
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]:     daemon
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]: 
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]: defaults
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]:     log global
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]:     mode http
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]:     option httplog
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]:     option dontlognull
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]:     option http-server-close
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]:     option forwardfor
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]:     retries                 3
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]:     timeout http-request    30s
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]:     timeout connect         30s
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]:     timeout client          32s
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]:     timeout server          32s
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]:     timeout http-keep-alive 30s
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]: 
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]: 
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]: listen listener
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]:     bind 169.254.169.254:80
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]:     server metadata /var/lib/neutron/metadata_proxy
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]:     http-request add-header X-OVN-Network-ID 6ff6a2ba-50a1-444b-9685-151db9bcac89
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 23 21:09:36 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.834 142158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89', 'env', 'PROCESS_TAG=haproxy-6ff6a2ba-50a1-444b-9685-151db9bcac89', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6ff6a2ba-50a1-444b-9685-151db9bcac89.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 23 21:09:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:36.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:36 compute-1 nova_compute[230183]: 2025-11-23 21:09:36.996 230187 DEBUG nova.compute.manager [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 23 21:09:36 compute-1 nova_compute[230183]: 2025-11-23 21:09:36.997 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932176.9959097, 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 23 21:09:36 compute-1 nova_compute[230183]: 2025-11-23 21:09:36.997 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] VM Started (Lifecycle Event)
Nov 23 21:09:37 compute-1 nova_compute[230183]: 2025-11-23 21:09:37.003 230187 DEBUG nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 23 21:09:37 compute-1 nova_compute[230183]: 2025-11-23 21:09:37.006 230187 INFO nova.virt.libvirt.driver [-] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Instance spawned successfully.
Nov 23 21:09:37 compute-1 nova_compute[230183]: 2025-11-23 21:09:37.007 230187 DEBUG nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 23 21:09:37 compute-1 nova_compute[230183]: 2025-11-23 21:09:37.027 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:09:37 compute-1 nova_compute[230183]: 2025-11-23 21:09:37.033 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 23 21:09:37 compute-1 nova_compute[230183]: 2025-11-23 21:09:37.038 230187 DEBUG nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:09:37 compute-1 nova_compute[230183]: 2025-11-23 21:09:37.038 230187 DEBUG nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:09:37 compute-1 nova_compute[230183]: 2025-11-23 21:09:37.039 230187 DEBUG nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:09:37 compute-1 nova_compute[230183]: 2025-11-23 21:09:37.039 230187 DEBUG nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:09:37 compute-1 nova_compute[230183]: 2025-11-23 21:09:37.039 230187 DEBUG nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:09:37 compute-1 nova_compute[230183]: 2025-11-23 21:09:37.040 230187 DEBUG nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:09:37 compute-1 nova_compute[230183]: 2025-11-23 21:09:37.077 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 23 21:09:37 compute-1 nova_compute[230183]: 2025-11-23 21:09:37.078 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932176.9960365, 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 23 21:09:37 compute-1 nova_compute[230183]: 2025-11-23 21:09:37.079 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] VM Paused (Lifecycle Event)
Nov 23 21:09:37 compute-1 nova_compute[230183]: 2025-11-23 21:09:37.104 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:09:37 compute-1 nova_compute[230183]: 2025-11-23 21:09:37.106 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932176.9999523, 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 23 21:09:37 compute-1 nova_compute[230183]: 2025-11-23 21:09:37.107 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] VM Resumed (Lifecycle Event)
Nov 23 21:09:37 compute-1 nova_compute[230183]: 2025-11-23 21:09:37.116 230187 INFO nova.compute.manager [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Took 5.74 seconds to spawn the instance on the hypervisor.
Nov 23 21:09:37 compute-1 nova_compute[230183]: 2025-11-23 21:09:37.116 230187 DEBUG nova.compute.manager [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:09:37 compute-1 nova_compute[230183]: 2025-11-23 21:09:37.118 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:37 compute-1 nova_compute[230183]: 2025-11-23 21:09:37.125 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:09:37 compute-1 nova_compute[230183]: 2025-11-23 21:09:37.128 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 23 21:09:37 compute-1 nova_compute[230183]: 2025-11-23 21:09:37.160 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 23 21:09:37 compute-1 nova_compute[230183]: 2025-11-23 21:09:37.178 230187 INFO nova.compute.manager [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Took 6.58 seconds to build instance.
Nov 23 21:09:37 compute-1 nova_compute[230183]: 2025-11-23 21:09:37.193 230187 DEBUG oslo_concurrency.lockutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:09:37 compute-1 podman[236460]: 2025-11-23 21:09:37.204042545 +0000 UTC m=+0.047368814 container create 21203f9aaa6cd9549801a5961e8c28a8bfc893b9cd6658b7ad005ac54c1b96c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 23 21:09:37 compute-1 systemd[1]: Started libpod-conmon-21203f9aaa6cd9549801a5961e8c28a8bfc893b9cd6658b7ad005ac54c1b96c5.scope.
Nov 23 21:09:37 compute-1 podman[236460]: 2025-11-23 21:09:37.179152202 +0000 UTC m=+0.022478491 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 23 21:09:37 compute-1 systemd[1]: Started libcrun container.
Nov 23 21:09:37 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4152db43b6d5104340674417ff7884d350338c590c450707f50593d1fb1c9d99/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 21:09:37 compute-1 podman[236460]: 2025-11-23 21:09:37.295611397 +0000 UTC m=+0.138937696 container init 21203f9aaa6cd9549801a5961e8c28a8bfc893b9cd6658b7ad005ac54c1b96c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 21:09:37 compute-1 podman[236460]: 2025-11-23 21:09:37.301222257 +0000 UTC m=+0.144548536 container start 21203f9aaa6cd9549801a5961e8c28a8bfc893b9cd6658b7ad005ac54c1b96c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 23 21:09:37 compute-1 neutron-haproxy-ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89[236475]: [NOTICE]   (236479) : New worker (236481) forked
Nov 23 21:09:37 compute-1 neutron-haproxy-ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89[236475]: [NOTICE]   (236479) : Loading success.
Nov 23 21:09:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:38.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:38 compute-1 ceph-mon[80135]: pgmap v846: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Nov 23 21:09:38 compute-1 nova_compute[230183]: 2025-11-23 21:09:38.827 230187 DEBUG nova.compute.manager [req-b8d9279e-8d35-452a-afad-0a9398019b39 req-f5abba18-b26c-483b-9ea0-dff01db04c81 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Received event network-vif-plugged-540c04be-373c-41ca-adee-2010345a34df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:09:38 compute-1 nova_compute[230183]: 2025-11-23 21:09:38.827 230187 DEBUG oslo_concurrency.lockutils [req-b8d9279e-8d35-452a-afad-0a9398019b39 req-f5abba18-b26c-483b-9ea0-dff01db04c81 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:09:38 compute-1 nova_compute[230183]: 2025-11-23 21:09:38.828 230187 DEBUG oslo_concurrency.lockutils [req-b8d9279e-8d35-452a-afad-0a9398019b39 req-f5abba18-b26c-483b-9ea0-dff01db04c81 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:09:38 compute-1 nova_compute[230183]: 2025-11-23 21:09:38.828 230187 DEBUG oslo_concurrency.lockutils [req-b8d9279e-8d35-452a-afad-0a9398019b39 req-f5abba18-b26c-483b-9ea0-dff01db04c81 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:09:38 compute-1 nova_compute[230183]: 2025-11-23 21:09:38.829 230187 DEBUG nova.compute.manager [req-b8d9279e-8d35-452a-afad-0a9398019b39 req-f5abba18-b26c-483b-9ea0-dff01db04c81 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] No waiting events found dispatching network-vif-plugged-540c04be-373c-41ca-adee-2010345a34df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 21:09:38 compute-1 nova_compute[230183]: 2025-11-23 21:09:38.829 230187 WARNING nova.compute.manager [req-b8d9279e-8d35-452a-afad-0a9398019b39 req-f5abba18-b26c-483b-9ea0-dff01db04c81 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Received unexpected event network-vif-plugged-540c04be-373c-41ca-adee-2010345a34df for instance with vm_state active and task_state None.
Nov 23 21:09:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:38.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:39 compute-1 ceph-mon[80135]: pgmap v847: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Nov 23 21:09:39 compute-1 ovn_controller[132845]: 2025-11-23T21:09:39Z|00069|binding|INFO|Releasing lport 4bff4598-93d2-442e-90fe-19336d84eb93 from this chassis (sb_readonly=0)
Nov 23 21:09:39 compute-1 nova_compute[230183]: 2025-11-23 21:09:39.756 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:39 compute-1 NetworkManager[49021]: <info>  [1763932179.7585] manager: (patch-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Nov 23 21:09:39 compute-1 NetworkManager[49021]: <info>  [1763932179.7592] manager: (patch-br-int-to-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Nov 23 21:09:39 compute-1 ovn_controller[132845]: 2025-11-23T21:09:39Z|00070|binding|INFO|Releasing lport 4bff4598-93d2-442e-90fe-19336d84eb93 from this chassis (sb_readonly=0)
Nov 23 21:09:39 compute-1 nova_compute[230183]: 2025-11-23 21:09:39.791 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:39 compute-1 nova_compute[230183]: 2025-11-23 21:09:39.795 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:09:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:40.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:09:40 compute-1 nova_compute[230183]: 2025-11-23 21:09:40.505 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:40 compute-1 nova_compute[230183]: 2025-11-23 21:09:40.923 230187 DEBUG nova.compute.manager [req-55def611-d3c7-4406-a3d9-0308faa9c3cb req-169affcb-a493-44da-bc8b-ed2dfde46065 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Received event network-changed-540c04be-373c-41ca-adee-2010345a34df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:09:40 compute-1 nova_compute[230183]: 2025-11-23 21:09:40.923 230187 DEBUG nova.compute.manager [req-55def611-d3c7-4406-a3d9-0308faa9c3cb req-169affcb-a493-44da-bc8b-ed2dfde46065 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Refreshing instance network info cache due to event network-changed-540c04be-373c-41ca-adee-2010345a34df. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 23 21:09:40 compute-1 nova_compute[230183]: 2025-11-23 21:09:40.924 230187 DEBUG oslo_concurrency.lockutils [req-55def611-d3c7-4406-a3d9-0308faa9c3cb req-169affcb-a493-44da-bc8b-ed2dfde46065 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:09:40 compute-1 nova_compute[230183]: 2025-11-23 21:09:40.924 230187 DEBUG oslo_concurrency.lockutils [req-55def611-d3c7-4406-a3d9-0308faa9c3cb req-169affcb-a493-44da-bc8b-ed2dfde46065 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:09:40 compute-1 nova_compute[230183]: 2025-11-23 21:09:40.924 230187 DEBUG nova.network.neutron [req-55def611-d3c7-4406-a3d9-0308faa9c3cb req-169affcb-a493-44da-bc8b-ed2dfde46065 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Refreshing network info cache for port 540c04be-373c-41ca-adee-2010345a34df _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 23 21:09:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:40.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:41 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:09:42 compute-1 nova_compute[230183]: 2025-11-23 21:09:42.120 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:42 compute-1 ceph-mon[80135]: pgmap v848: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Nov 23 21:09:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:09:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:42.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:09:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 21:09:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:42.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 21:09:43 compute-1 nova_compute[230183]: 2025-11-23 21:09:43.522 230187 DEBUG nova.network.neutron [req-55def611-d3c7-4406-a3d9-0308faa9c3cb req-169affcb-a493-44da-bc8b-ed2dfde46065 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Updated VIF entry in instance network info cache for port 540c04be-373c-41ca-adee-2010345a34df. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 23 21:09:43 compute-1 nova_compute[230183]: 2025-11-23 21:09:43.522 230187 DEBUG nova.network.neutron [req-55def611-d3c7-4406-a3d9-0308faa9c3cb req-169affcb-a493-44da-bc8b-ed2dfde46065 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Updating instance_info_cache with network_info: [{"id": "540c04be-373c-41ca-adee-2010345a34df", "address": "fa:16:3e:9d:e3:b7", "network": {"id": "6ff6a2ba-50a1-444b-9685-151db9bcac89", "bridge": "br-int", "label": "tempest-network-smoke--285822202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap540c04be-37", "ovs_interfaceid": "540c04be-373c-41ca-adee-2010345a34df", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:09:43 compute-1 nova_compute[230183]: 2025-11-23 21:09:43.547 230187 DEBUG oslo_concurrency.lockutils [req-55def611-d3c7-4406-a3d9-0308faa9c3cb req-169affcb-a493-44da-bc8b-ed2dfde46065 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:09:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:44.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:44 compute-1 ceph-mon[80135]: pgmap v849: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 23 21:09:44 compute-1 sudo[236495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:09:44 compute-1 sudo[236495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:09:44 compute-1 sudo[236495]: pam_unix(sudo:session): session closed for user root
Nov 23 21:09:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:44.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:45 compute-1 nova_compute[230183]: 2025-11-23 21:09:45.539 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:46.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:46 compute-1 ceph-mon[80135]: pgmap v850: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 23 21:09:46 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:09:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:46.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:47 compute-1 nova_compute[230183]: 2025-11-23 21:09:47.122 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:09:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:48.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:09:48 compute-1 ceph-mon[80135]: pgmap v851: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 23 21:09:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:09:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:09:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:48.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:09:49 compute-1 podman[236524]: 2025-11-23 21:09:49.631073181 +0000 UTC m=+0.046903822 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 23 21:09:49 compute-1 podman[236523]: 2025-11-23 21:09:49.688902503 +0000 UTC m=+0.104350943 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 21:09:50 compute-1 ovn_controller[132845]: 2025-11-23T21:09:50Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9d:e3:b7 10.100.0.11
Nov 23 21:09:50 compute-1 ovn_controller[132845]: 2025-11-23T21:09:50Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9d:e3:b7 10.100.0.11
Nov 23 21:09:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 21:09:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:50.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 21:09:50 compute-1 ceph-mon[80135]: pgmap v852: 337 pgs: 337 active+clean; 93 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 511 KiB/s wr, 92 op/s
Nov 23 21:09:50 compute-1 nova_compute[230183]: 2025-11-23 21:09:50.541 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:50.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:09:51.066 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:09:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:09:51.067 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:09:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:09:51.067 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:09:51 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:09:52 compute-1 nova_compute[230183]: 2025-11-23 21:09:52.124 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:09:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:52.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:09:52 compute-1 ceph-mon[80135]: pgmap v853: 337 pgs: 337 active+clean; 109 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 910 KiB/s rd, 2.0 MiB/s wr, 76 op/s
Nov 23 21:09:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:52.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:53 compute-1 podman[236570]: 2025-11-23 21:09:53.664686691 +0000 UTC m=+0.082171152 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd)
Nov 23 21:09:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:54.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:54 compute-1 ceph-mon[80135]: pgmap v854: 337 pgs: 337 active+clean; 109 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 2.0 MiB/s wr, 47 op/s
Nov 23 21:09:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:09:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:54.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:09:55 compute-1 nova_compute[230183]: 2025-11-23 21:09:55.544 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:56.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:56 compute-1 ceph-mon[80135]: pgmap v855: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 23 21:09:56 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:09:56 compute-1 nova_compute[230183]: 2025-11-23 21:09:56.609 230187 INFO nova.compute.manager [None req-0e564f10-93e4-4ea1-9450-01843133a446 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Get console output
Nov 23 21:09:56 compute-1 nova_compute[230183]: 2025-11-23 21:09:56.615 234120 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 23 21:09:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:09:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:56.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:09:57 compute-1 nova_compute[230183]: 2025-11-23 21:09:57.126 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:09:57 compute-1 nova_compute[230183]: 2025-11-23 21:09:57.428 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:09:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:09:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:58.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:09:58 compute-1 nova_compute[230183]: 2025-11-23 21:09:58.377 230187 DEBUG nova.compute.manager [req-e4b625f9-fa61-4ce7-b6f0-f2f2d6f347fb req-36543de2-0d29-48f1-990e-b68cd29b7f99 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Received event network-changed-540c04be-373c-41ca-adee-2010345a34df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:09:58 compute-1 nova_compute[230183]: 2025-11-23 21:09:58.378 230187 DEBUG nova.compute.manager [req-e4b625f9-fa61-4ce7-b6f0-f2f2d6f347fb req-36543de2-0d29-48f1-990e-b68cd29b7f99 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Refreshing instance network info cache due to event network-changed-540c04be-373c-41ca-adee-2010345a34df. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 23 21:09:58 compute-1 nova_compute[230183]: 2025-11-23 21:09:58.378 230187 DEBUG oslo_concurrency.lockutils [req-e4b625f9-fa61-4ce7-b6f0-f2f2d6f347fb req-36543de2-0d29-48f1-990e-b68cd29b7f99 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:09:58 compute-1 nova_compute[230183]: 2025-11-23 21:09:58.379 230187 DEBUG oslo_concurrency.lockutils [req-e4b625f9-fa61-4ce7-b6f0-f2f2d6f347fb req-36543de2-0d29-48f1-990e-b68cd29b7f99 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:09:58 compute-1 nova_compute[230183]: 2025-11-23 21:09:58.379 230187 DEBUG nova.network.neutron [req-e4b625f9-fa61-4ce7-b6f0-f2f2d6f347fb req-36543de2-0d29-48f1-990e-b68cd29b7f99 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Refreshing network info cache for port 540c04be-373c-41ca-adee-2010345a34df _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 23 21:09:58 compute-1 ceph-mon[80135]: pgmap v856: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 23 21:09:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:09:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:09:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:58.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:09:59 compute-1 nova_compute[230183]: 2025-11-23 21:09:59.331 230187 DEBUG nova.network.neutron [req-e4b625f9-fa61-4ce7-b6f0-f2f2d6f347fb req-36543de2-0d29-48f1-990e-b68cd29b7f99 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Updated VIF entry in instance network info cache for port 540c04be-373c-41ca-adee-2010345a34df. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 23 21:09:59 compute-1 nova_compute[230183]: 2025-11-23 21:09:59.332 230187 DEBUG nova.network.neutron [req-e4b625f9-fa61-4ce7-b6f0-f2f2d6f347fb req-36543de2-0d29-48f1-990e-b68cd29b7f99 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Updating instance_info_cache with network_info: [{"id": "540c04be-373c-41ca-adee-2010345a34df", "address": "fa:16:3e:9d:e3:b7", "network": {"id": "6ff6a2ba-50a1-444b-9685-151db9bcac89", "bridge": "br-int", "label": "tempest-network-smoke--285822202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap540c04be-37", "ovs_interfaceid": "540c04be-373c-41ca-adee-2010345a34df", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:09:59 compute-1 nova_compute[230183]: 2025-11-23 21:09:59.358 230187 DEBUG oslo_concurrency.lockutils [req-e4b625f9-fa61-4ce7-b6f0-f2f2d6f347fb req-36543de2-0d29-48f1-990e-b68cd29b7f99 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:10:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:00.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:00 compute-1 nova_compute[230183]: 2025-11-23 21:10:00.587 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:10:00 compute-1 ceph-mon[80135]: pgmap v857: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 23 21:10:00 compute-1 ceph-mon[80135]: overall HEALTH_OK
Nov 23 21:10:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:00.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:01 compute-1 nova_compute[230183]: 2025-11-23 21:10:01.423 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:10:01 compute-1 nova_compute[230183]: 2025-11-23 21:10:01.425 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:10:01 compute-1 nova_compute[230183]: 2025-11-23 21:10:01.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:10:01 compute-1 nova_compute[230183]: 2025-11-23 21:10:01.426 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 21:10:01 compute-1 nova_compute[230183]: 2025-11-23 21:10:01.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:10:01 compute-1 nova_compute[230183]: 2025-11-23 21:10:01.448 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:10:01 compute-1 nova_compute[230183]: 2025-11-23 21:10:01.448 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:10:01 compute-1 nova_compute[230183]: 2025-11-23 21:10:01.449 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:10:01 compute-1 nova_compute[230183]: 2025-11-23 21:10:01.449 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 21:10:01 compute-1 nova_compute[230183]: 2025-11-23 21:10:01.450 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:10:01 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:10:01 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:10:01 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2877369591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:10:01 compute-1 nova_compute[230183]: 2025-11-23 21:10:01.891 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:10:01 compute-1 nova_compute[230183]: 2025-11-23 21:10:01.958 230187 DEBUG nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 23 21:10:01 compute-1 nova_compute[230183]: 2025-11-23 21:10:01.958 230187 DEBUG nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 23 21:10:02 compute-1 nova_compute[230183]: 2025-11-23 21:10:02.101 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 21:10:02 compute-1 nova_compute[230183]: 2025-11-23 21:10:02.102 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4752MB free_disk=59.942752838134766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 21:10:02 compute-1 nova_compute[230183]: 2025-11-23 21:10:02.102 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:10:02 compute-1 nova_compute[230183]: 2025-11-23 21:10:02.103 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:10:02 compute-1 nova_compute[230183]: 2025-11-23 21:10:02.129 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:10:02 compute-1 nova_compute[230183]: 2025-11-23 21:10:02.170 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Instance 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 23 21:10:02 compute-1 nova_compute[230183]: 2025-11-23 21:10:02.170 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 21:10:02 compute-1 nova_compute[230183]: 2025-11-23 21:10:02.170 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 21:10:02 compute-1 nova_compute[230183]: 2025-11-23 21:10:02.204 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:10:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:10:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:02.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:10:02 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:10:02 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4012904158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:10:02 compute-1 nova_compute[230183]: 2025-11-23 21:10:02.631 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:10:02 compute-1 nova_compute[230183]: 2025-11-23 21:10:02.636 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:10:02 compute-1 ceph-mon[80135]: pgmap v858: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 257 KiB/s rd, 1.7 MiB/s wr, 47 op/s
Nov 23 21:10:02 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2877369591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:10:02 compute-1 nova_compute[230183]: 2025-11-23 21:10:02.652 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:10:02 compute-1 nova_compute[230183]: 2025-11-23 21:10:02.669 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 21:10:02 compute-1 nova_compute[230183]: 2025-11-23 21:10:02.669 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:10:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:02.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:03 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/4012904158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:10:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:10:03 compute-1 nova_compute[230183]: 2025-11-23 21:10:03.670 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:10:03 compute-1 nova_compute[230183]: 2025-11-23 21:10:03.670 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 21:10:03 compute-1 nova_compute[230183]: 2025-11-23 21:10:03.670 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 21:10:03 compute-1 nova_compute[230183]: 2025-11-23 21:10:03.785 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "refresh_cache-227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:10:03 compute-1 nova_compute[230183]: 2025-11-23 21:10:03.785 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquired lock "refresh_cache-227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:10:03 compute-1 nova_compute[230183]: 2025-11-23 21:10:03.786 230187 DEBUG nova.network.neutron [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 23 21:10:03 compute-1 nova_compute[230183]: 2025-11-23 21:10:03.786 230187 DEBUG nova.objects.instance [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lazy-loading 'info_cache' on Instance uuid 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:10:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:04.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:04 compute-1 sudo[236642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:10:04 compute-1 sudo[236642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:10:04 compute-1 sudo[236642]: pam_unix(sudo:session): session closed for user root
Nov 23 21:10:04 compute-1 ceph-mon[80135]: pgmap v859: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 108 KiB/s wr, 19 op/s
Nov 23 21:10:04 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1813209308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:10:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:10:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:04.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:10:05 compute-1 nova_compute[230183]: 2025-11-23 21:10:05.590 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:10:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:10:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:06.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:10:06 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:10:06 compute-1 ceph-mon[80135]: pgmap v860: 337 pgs: 337 active+clean; 129 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 442 KiB/s wr, 30 op/s
Nov 23 21:10:06 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2056082769' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:10:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 21:10:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:06.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 21:10:07 compute-1 nova_compute[230183]: 2025-11-23 21:10:07.130 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:10:07 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2133734318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:10:07 compute-1 nova_compute[230183]: 2025-11-23 21:10:07.798 230187 DEBUG nova.network.neutron [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Updating instance_info_cache with network_info: [{"id": "540c04be-373c-41ca-adee-2010345a34df", "address": "fa:16:3e:9d:e3:b7", "network": {"id": "6ff6a2ba-50a1-444b-9685-151db9bcac89", "bridge": "br-int", "label": "tempest-network-smoke--285822202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap540c04be-37", "ovs_interfaceid": "540c04be-373c-41ca-adee-2010345a34df", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:10:07 compute-1 nova_compute[230183]: 2025-11-23 21:10:07.826 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Releasing lock "refresh_cache-227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:10:07 compute-1 nova_compute[230183]: 2025-11-23 21:10:07.826 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 23 21:10:07 compute-1 nova_compute[230183]: 2025-11-23 21:10:07.827 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:10:07 compute-1 nova_compute[230183]: 2025-11-23 21:10:07.828 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:10:07 compute-1 nova_compute[230183]: 2025-11-23 21:10:07.828 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:10:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:08.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:08 compute-1 ceph-mon[80135]: pgmap v861: 337 pgs: 337 active+clean; 129 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 347 KiB/s wr, 12 op/s
Nov 23 21:10:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/1797927620' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 21:10:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/1797927620' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 21:10:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:09.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:09 compute-1 nova_compute[230183]: 2025-11-23 21:10:09.582 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:10:10 compute-1 ceph-mon[80135]: pgmap v862: 337 pgs: 337 active+clean; 142 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 982 KiB/s wr, 14 op/s
Nov 23 21:10:10 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2512106924' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:10:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:10.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:10 compute-1 nova_compute[230183]: 2025-11-23 21:10:10.643 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:10:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:11.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:11 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/645707027' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:10:11 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1538673241' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:10:11 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:10:12 compute-1 nova_compute[230183]: 2025-11-23 21:10:12.131 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:10:12 compute-1 ceph-mon[80135]: pgmap v863: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 23 21:10:12 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/132932692' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:10:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:12.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:13.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:14 compute-1 ceph-mon[80135]: pgmap v864: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 23 21:10:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:14.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:15.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:15 compute-1 nova_compute[230183]: 2025-11-23 21:10:15.645 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:10:16 compute-1 ceph-mon[80135]: pgmap v865: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 545 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Nov 23 21:10:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:16.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:16 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:10:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:17.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:17 compute-1 nova_compute[230183]: 2025-11-23 21:10:17.133 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:10:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:18.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:18 compute-1 ceph-mon[80135]: pgmap v866: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 539 KiB/s rd, 1.5 MiB/s wr, 44 op/s
Nov 23 21:10:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:10:18 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:10:18.394 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 21:10:18 compute-1 nova_compute[230183]: 2025-11-23 21:10:18.395 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:10:18 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:10:18.396 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 23 21:10:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:19.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:20.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:20 compute-1 ceph-mon[80135]: pgmap v867: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.5 MiB/s wr, 63 op/s
Nov 23 21:10:20 compute-1 podman[236676]: 2025-11-23 21:10:20.43970292 +0000 UTC m=+0.058717017 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 23 21:10:20 compute-1 podman[236675]: 2025-11-23 21:10:20.478668779 +0000 UTC m=+0.097161882 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 23 21:10:20 compute-1 nova_compute[230183]: 2025-11-23 21:10:20.647 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:10:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:21.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:21 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:10:22 compute-1 nova_compute[230183]: 2025-11-23 21:10:22.134 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:10:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:22.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:22 compute-1 ceph-mon[80135]: pgmap v868: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 862 KiB/s wr, 89 op/s
Nov 23 21:10:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:23.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:10:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:24.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:10:24 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:10:24.398 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d8ff4ac4-2bee-48db-b79e-2466bc4db046, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:10:24 compute-1 ceph-mon[80135]: pgmap v869: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 23 21:10:24 compute-1 podman[236723]: 2025-11-23 21:10:24.667662406 +0000 UTC m=+0.084572407 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118)
Nov 23 21:10:24 compute-1 sudo[236743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:10:24 compute-1 sudo[236743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:10:24 compute-1 sudo[236743]: pam_unix(sudo:session): session closed for user root
Nov 23 21:10:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:10:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:25.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:10:25 compute-1 nova_compute[230183]: 2025-11-23 21:10:25.651 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:10:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:26.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:26 compute-1 ceph-mon[80135]: pgmap v870: 337 pgs: 337 active+clean; 176 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 617 KiB/s wr, 85 op/s
Nov 23 21:10:26 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:10:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:27.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:27 compute-1 nova_compute[230183]: 2025-11-23 21:10:27.135 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:10:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:28.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:28 compute-1 ceph-mon[80135]: pgmap v871: 337 pgs: 337 active+clean; 176 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 604 KiB/s wr, 57 op/s
Nov 23 21:10:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:29.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:30.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:30 compute-1 ceph-mon[80135]: pgmap v872: 337 pgs: 337 active+clean; 181 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 916 KiB/s wr, 66 op/s
Nov 23 21:10:30 compute-1 nova_compute[230183]: 2025-11-23 21:10:30.654 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:10:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:10:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:31.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:10:31 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:10:32 compute-1 nova_compute[230183]: 2025-11-23 21:10:32.137 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:10:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:32.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:32 compute-1 ceph-mon[80135]: pgmap v873: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 92 op/s
Nov 23 21:10:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:10:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:33.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:10:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:10:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:10:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:34.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:10:34 compute-1 ceph-mon[80135]: pgmap v874: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 23 21:10:35 compute-1 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 21:10:35 compute-1 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 11K writes, 43K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 3004 syncs, 3.82 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2301 writes, 7858 keys, 2301 commit groups, 1.0 writes per commit group, ingest: 8.46 MB, 0.01 MB/s
                                           Interval WAL: 2301 writes, 911 syncs, 2.53 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 21:10:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:35.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:35 compute-1 sshd-session[236774]: Invalid user solana from 161.35.133.66 port 38912
Nov 23 21:10:35 compute-1 sshd-session[236774]: Connection closed by invalid user solana 161.35.133.66 port 38912 [preauth]
Nov 23 21:10:35 compute-1 nova_compute[230183]: 2025-11-23 21:10:35.656 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:10:36 compute-1 ceph-mon[80135]: pgmap v875: 337 pgs: 337 active+clean; 181 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Nov 23 21:10:36 compute-1 sudo[236777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 21:10:36 compute-1 sudo[236777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:10:36 compute-1 sudo[236777]: pam_unix(sudo:session): session closed for user root
Nov 23 21:10:36 compute-1 sudo[236802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Nov 23 21:10:36 compute-1 sudo[236802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:10:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:36.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:37 compute-1 nova_compute[230183]: 2025-11-23 21:10:37.244 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:10:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:10:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:37.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:10:37 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:10:37 compute-1 sudo[236802]: pam_unix(sudo:session): session closed for user root
Nov 23 21:10:37 compute-1 sudo[236846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 21:10:37 compute-1 sudo[236846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:10:37 compute-1 sudo[236846]: pam_unix(sudo:session): session closed for user root
Nov 23 21:10:37 compute-1 sudo[236871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 21:10:37 compute-1 sudo[236871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:10:38 compute-1 sudo[236871]: pam_unix(sudo:session): session closed for user root
Nov 23 21:10:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:38.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:38 compute-1 ceph-mon[80135]: pgmap v876: 337 pgs: 337 active+clean; 181 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 317 KiB/s rd, 1.6 MiB/s wr, 61 op/s
Nov 23 21:10:38 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:10:38 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:10:38 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:10:38 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:10:38 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/788483409' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:10:38 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:10:38 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 21:10:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:39.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:39 compute-1 nova_compute[230183]: 2025-11-23 21:10:39.521 230187 DEBUG oslo_concurrency.lockutils [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:10:39 compute-1 nova_compute[230183]: 2025-11-23 21:10:39.522 230187 DEBUG oslo_concurrency.lockutils [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:10:39 compute-1 nova_compute[230183]: 2025-11-23 21:10:39.522 230187 DEBUG oslo_concurrency.lockutils [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:10:39 compute-1 nova_compute[230183]: 2025-11-23 21:10:39.522 230187 DEBUG oslo_concurrency.lockutils [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:10:39 compute-1 nova_compute[230183]: 2025-11-23 21:10:39.522 230187 DEBUG oslo_concurrency.lockutils [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:10:39 compute-1 nova_compute[230183]: 2025-11-23 21:10:39.523 230187 INFO nova.compute.manager [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Terminating instance
Nov 23 21:10:39 compute-1 nova_compute[230183]: 2025-11-23 21:10:39.524 230187 DEBUG nova.compute.manager [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 23 21:10:39 compute-1 kernel: tap540c04be-37 (unregistering): left promiscuous mode
Nov 23 21:10:39 compute-1 NetworkManager[49021]: <info>  [1763932239.5746] device (tap540c04be-37): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 23 21:10:39 compute-1 ovn_controller[132845]: 2025-11-23T21:10:39Z|00071|binding|INFO|Releasing lport 540c04be-373c-41ca-adee-2010345a34df from this chassis (sb_readonly=0)
Nov 23 21:10:39 compute-1 ovn_controller[132845]: 2025-11-23T21:10:39Z|00072|binding|INFO|Setting lport 540c04be-373c-41ca-adee-2010345a34df down in Southbound
Nov 23 21:10:39 compute-1 nova_compute[230183]: 2025-11-23 21:10:39.584 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:10:39 compute-1 ovn_controller[132845]: 2025-11-23T21:10:39Z|00073|binding|INFO|Removing iface tap540c04be-37 ovn-installed in OVS
Nov 23 21:10:39 compute-1 nova_compute[230183]: 2025-11-23 21:10:39.586 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:10:39 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:10:39.595 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:e3:b7 10.100.0.11'], port_security=['fa:16:3e:9d:e3:b7 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '227fff00-2bf2-4d7a-9ee7-ff4eaddc0880', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ff6a2ba-50a1-444b-9685-151db9bcac89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '20b5a6ce-6e21-4158-a0ab-eaca16146e81', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0c0604ff-606a-413a-88a2-c316eba90e56, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=540c04be-373c-41ca-adee-2010345a34df) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 21:10:39 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:10:39.596 142158 INFO neutron.agent.ovn.metadata.agent [-] Port 540c04be-373c-41ca-adee-2010345a34df in datapath 6ff6a2ba-50a1-444b-9685-151db9bcac89 unbound from our chassis
Nov 23 21:10:39 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:10:39.597 142158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6ff6a2ba-50a1-444b-9685-151db9bcac89, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 23 21:10:39 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:10:39.598 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[b9dd43d2-b990-4b4e-b80a-eb47dadbabfe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:10:39 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:10:39.598 142158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89 namespace which is not needed anymore
Nov 23 21:10:39 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:10:39 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:10:39 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 21:10:39 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 21:10:39 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:10:39 compute-1 nova_compute[230183]: 2025-11-23 21:10:39.602 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:10:39 compute-1 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000004.scope: Deactivated successfully.
Nov 23 21:10:39 compute-1 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000004.scope: Consumed 15.742s CPU time.
Nov 23 21:10:39 compute-1 systemd-machined[193469]: Machine qemu-3-instance-00000004 terminated.
Nov 23 21:10:39 compute-1 neutron-haproxy-ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89[236475]: [NOTICE]   (236479) : haproxy version is 2.8.14-c23fe91
Nov 23 21:10:39 compute-1 neutron-haproxy-ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89[236475]: [NOTICE]   (236479) : path to executable is /usr/sbin/haproxy
Nov 23 21:10:39 compute-1 neutron-haproxy-ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89[236475]: [WARNING]  (236479) : Exiting Master process...
Nov 23 21:10:39 compute-1 neutron-haproxy-ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89[236475]: [WARNING]  (236479) : Exiting Master process...
Nov 23 21:10:39 compute-1 neutron-haproxy-ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89[236475]: [ALERT]    (236479) : Current worker (236481) exited with code 143 (Terminated)
Nov 23 21:10:39 compute-1 neutron-haproxy-ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89[236475]: [WARNING]  (236479) : All workers exited. Exiting... (0)
Nov 23 21:10:39 compute-1 systemd[1]: libpod-21203f9aaa6cd9549801a5961e8c28a8bfc893b9cd6658b7ad005ac54c1b96c5.scope: Deactivated successfully.
Nov 23 21:10:39 compute-1 podman[236953]: 2025-11-23 21:10:39.729403934 +0000 UTC m=+0.045952367 container died 21203f9aaa6cd9549801a5961e8c28a8bfc893b9cd6658b7ad005ac54c1b96c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 21:10:39 compute-1 nova_compute[230183]: 2025-11-23 21:10:39.756 230187 INFO nova.virt.libvirt.driver [-] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Instance destroyed successfully.
Nov 23 21:10:39 compute-1 nova_compute[230183]: 2025-11-23 21:10:39.758 230187 DEBUG nova.objects.instance [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'resources' on Instance uuid 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:10:39 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-21203f9aaa6cd9549801a5961e8c28a8bfc893b9cd6658b7ad005ac54c1b96c5-userdata-shm.mount: Deactivated successfully.
Nov 23 21:10:39 compute-1 systemd[1]: var-lib-containers-storage-overlay-4152db43b6d5104340674417ff7884d350338c590c450707f50593d1fb1c9d99-merged.mount: Deactivated successfully.
Nov 23 21:10:39 compute-1 podman[236953]: 2025-11-23 21:10:39.78097867 +0000 UTC m=+0.097527043 container cleanup 21203f9aaa6cd9549801a5961e8c28a8bfc893b9cd6658b7ad005ac54c1b96c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 21:10:39 compute-1 nova_compute[230183]: 2025-11-23 21:10:39.784 230187 DEBUG nova.virt.libvirt.vif [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-23T21:09:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-626843533',display_name='tempest-TestNetworkBasicOps-server-626843533',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-626843533',id=4,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBzFKgfz1QVXAYBgw9WYLDmImQIyNZIUJvYaUSeZsmfvEoA7CUytAymkLL0tqBwm8cJVrzUl6E9R6D/qdooFrc51SiAGOyjiHvRBM9c3gaFOzuWbTw1Aa3lZ7MmCQiSUEQ==',key_name='tempest-TestNetworkBasicOps-1952591884',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:09:37Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-mabh37mo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:09:37Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=227fff00-2bf2-4d7a-9ee7-ff4eaddc0880,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "540c04be-373c-41ca-adee-2010345a34df", "address": "fa:16:3e:9d:e3:b7", "network": {"id": "6ff6a2ba-50a1-444b-9685-151db9bcac89", "bridge": "br-int", "label": "tempest-network-smoke--285822202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap540c04be-37", "ovs_interfaceid": "540c04be-373c-41ca-adee-2010345a34df", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 23 21:10:39 compute-1 nova_compute[230183]: 2025-11-23 21:10:39.785 230187 DEBUG nova.network.os_vif_util [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "540c04be-373c-41ca-adee-2010345a34df", "address": "fa:16:3e:9d:e3:b7", "network": {"id": "6ff6a2ba-50a1-444b-9685-151db9bcac89", "bridge": "br-int", "label": "tempest-network-smoke--285822202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap540c04be-37", "ovs_interfaceid": "540c04be-373c-41ca-adee-2010345a34df", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 23 21:10:39 compute-1 nova_compute[230183]: 2025-11-23 21:10:39.785 230187 DEBUG nova.network.os_vif_util [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9d:e3:b7,bridge_name='br-int',has_traffic_filtering=True,id=540c04be-373c-41ca-adee-2010345a34df,network=Network(6ff6a2ba-50a1-444b-9685-151db9bcac89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap540c04be-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 23 21:10:39 compute-1 nova_compute[230183]: 2025-11-23 21:10:39.786 230187 DEBUG os_vif [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:e3:b7,bridge_name='br-int',has_traffic_filtering=True,id=540c04be-373c-41ca-adee-2010345a34df,network=Network(6ff6a2ba-50a1-444b-9685-151db9bcac89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap540c04be-37') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 23 21:10:39 compute-1 nova_compute[230183]: 2025-11-23 21:10:39.787 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:10:39 compute-1 nova_compute[230183]: 2025-11-23 21:10:39.787 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap540c04be-37, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:10:39 compute-1 nova_compute[230183]: 2025-11-23 21:10:39.788 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:10:39 compute-1 systemd[1]: libpod-conmon-21203f9aaa6cd9549801a5961e8c28a8bfc893b9cd6658b7ad005ac54c1b96c5.scope: Deactivated successfully.
Nov 23 21:10:39 compute-1 nova_compute[230183]: 2025-11-23 21:10:39.789 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:10:39 compute-1 nova_compute[230183]: 2025-11-23 21:10:39.790 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:10:39 compute-1 nova_compute[230183]: 2025-11-23 21:10:39.793 230187 INFO os_vif [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:e3:b7,bridge_name='br-int',has_traffic_filtering=True,id=540c04be-373c-41ca-adee-2010345a34df,network=Network(6ff6a2ba-50a1-444b-9685-151db9bcac89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap540c04be-37')
Nov 23 21:10:39 compute-1 podman[236991]: 2025-11-23 21:10:39.844196307 +0000 UTC m=+0.039453214 container remove 21203f9aaa6cd9549801a5961e8c28a8bfc893b9cd6658b7ad005ac54c1b96c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 23 21:10:39 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:10:39.849 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[81009a2f-8b2f-4df3-a7ca-62bca3a33332]: (4, ('Sun Nov 23 09:10:39 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89 (21203f9aaa6cd9549801a5961e8c28a8bfc893b9cd6658b7ad005ac54c1b96c5)\n21203f9aaa6cd9549801a5961e8c28a8bfc893b9cd6658b7ad005ac54c1b96c5\nSun Nov 23 09:10:39 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89 (21203f9aaa6cd9549801a5961e8c28a8bfc893b9cd6658b7ad005ac54c1b96c5)\n21203f9aaa6cd9549801a5961e8c28a8bfc893b9cd6658b7ad005ac54c1b96c5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:10:39 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:10:39.850 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[9b63c37f-4d60-408b-9ec1-2c4c6f52e2f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:10:39 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:10:39.851 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ff6a2ba-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:10:39 compute-1 nova_compute[230183]: 2025-11-23 21:10:39.852 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:10:39 compute-1 kernel: tap6ff6a2ba-50: left promiscuous mode
Nov 23 21:10:39 compute-1 nova_compute[230183]: 2025-11-23 21:10:39.866 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:10:39 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:10:39.868 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[dbcfc70d-0e22-4104-8ca0-2b4d4373395e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:10:39 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:10:39.880 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[dc873c0c-29f2-4828-956b-ed970aee6523]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:10:39 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:10:39.881 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[7b60759d-04bc-47e3-8784-72017acfc9a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:10:39 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:10:39.893 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[8142c5d0-22fd-44f1-a6f6-c9047426f772]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 412518, 'reachable_time': 44784, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237024, 'error': None, 'target': 'ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:10:39 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:10:39.895 142272 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 23 21:10:39 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:10:39.895 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[bd59d1ee-30df-4f61-856a-b2325a29b228]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:10:39 compute-1 systemd[1]: run-netns-ovnmeta\x2d6ff6a2ba\x2d50a1\x2d444b\x2d9685\x2d151db9bcac89.mount: Deactivated successfully.
Nov 23 21:10:40 compute-1 nova_compute[230183]: 2025-11-23 21:10:40.245 230187 INFO nova.virt.libvirt.driver [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Deleting instance files /var/lib/nova/instances/227fff00-2bf2-4d7a-9ee7-ff4eaddc0880_del
Nov 23 21:10:40 compute-1 nova_compute[230183]: 2025-11-23 21:10:40.246 230187 INFO nova.virt.libvirt.driver [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Deletion of /var/lib/nova/instances/227fff00-2bf2-4d7a-9ee7-ff4eaddc0880_del complete
Nov 23 21:10:40 compute-1 nova_compute[230183]: 2025-11-23 21:10:40.346 230187 INFO nova.compute.manager [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Took 0.82 seconds to destroy the instance on the hypervisor.
Nov 23 21:10:40 compute-1 nova_compute[230183]: 2025-11-23 21:10:40.346 230187 DEBUG oslo.service.loopingcall [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 23 21:10:40 compute-1 nova_compute[230183]: 2025-11-23 21:10:40.347 230187 DEBUG nova.compute.manager [-] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 23 21:10:40 compute-1 nova_compute[230183]: 2025-11-23 21:10:40.347 230187 DEBUG nova.network.neutron [-] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 23 21:10:40 compute-1 nova_compute[230183]: 2025-11-23 21:10:40.352 230187 DEBUG nova.compute.manager [req-04a716ff-99bb-4a83-a119-8ab16e95fee1 req-3d25a8a4-e174-42ff-9ade-610bd5825b92 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Received event network-vif-unplugged-540c04be-373c-41ca-adee-2010345a34df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:10:40 compute-1 nova_compute[230183]: 2025-11-23 21:10:40.352 230187 DEBUG oslo_concurrency.lockutils [req-04a716ff-99bb-4a83-a119-8ab16e95fee1 req-3d25a8a4-e174-42ff-9ade-610bd5825b92 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:10:40 compute-1 nova_compute[230183]: 2025-11-23 21:10:40.353 230187 DEBUG oslo_concurrency.lockutils [req-04a716ff-99bb-4a83-a119-8ab16e95fee1 req-3d25a8a4-e174-42ff-9ade-610bd5825b92 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:10:40 compute-1 nova_compute[230183]: 2025-11-23 21:10:40.353 230187 DEBUG oslo_concurrency.lockutils [req-04a716ff-99bb-4a83-a119-8ab16e95fee1 req-3d25a8a4-e174-42ff-9ade-610bd5825b92 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:10:40 compute-1 nova_compute[230183]: 2025-11-23 21:10:40.353 230187 DEBUG nova.compute.manager [req-04a716ff-99bb-4a83-a119-8ab16e95fee1 req-3d25a8a4-e174-42ff-9ade-610bd5825b92 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] No waiting events found dispatching network-vif-unplugged-540c04be-373c-41ca-adee-2010345a34df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 21:10:40 compute-1 nova_compute[230183]: 2025-11-23 21:10:40.353 230187 DEBUG nova.compute.manager [req-04a716ff-99bb-4a83-a119-8ab16e95fee1 req-3d25a8a4-e174-42ff-9ade-610bd5825b92 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Received event network-vif-unplugged-540c04be-373c-41ca-adee-2010345a34df for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 23 21:10:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:40.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:41 compute-1 ceph-mon[80135]: pgmap v877: 337 pgs: 337 active+clean; 163 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 317 KiB/s rd, 1.6 MiB/s wr, 62 op/s
Nov 23 21:10:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:41.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:42 compute-1 ceph-mon[80135]: pgmap v878: 337 pgs: 337 active+clean; 96 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 285 KiB/s rd, 1.3 MiB/s wr, 75 op/s
Nov 23 21:10:42 compute-1 nova_compute[230183]: 2025-11-23 21:10:42.247 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:10:42 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:10:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:42.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:43.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:44 compute-1 sudo[237028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 21:10:44 compute-1 sudo[237028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:10:44 compute-1 sudo[237028]: pam_unix(sudo:session): session closed for user root
Nov 23 21:10:44 compute-1 nova_compute[230183]: 2025-11-23 21:10:44.299 230187 DEBUG nova.compute.manager [req-dc4e1632-79dd-4f3a-9ff8-cebabc78a014 req-788fb01a-3fa9-4773-b5fe-2fa1157a5e0e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Received event network-vif-plugged-540c04be-373c-41ca-adee-2010345a34df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:10:44 compute-1 nova_compute[230183]: 2025-11-23 21:10:44.299 230187 DEBUG oslo_concurrency.lockutils [req-dc4e1632-79dd-4f3a-9ff8-cebabc78a014 req-788fb01a-3fa9-4773-b5fe-2fa1157a5e0e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:10:44 compute-1 nova_compute[230183]: 2025-11-23 21:10:44.300 230187 DEBUG oslo_concurrency.lockutils [req-dc4e1632-79dd-4f3a-9ff8-cebabc78a014 req-788fb01a-3fa9-4773-b5fe-2fa1157a5e0e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:10:44 compute-1 nova_compute[230183]: 2025-11-23 21:10:44.300 230187 DEBUG oslo_concurrency.lockutils [req-dc4e1632-79dd-4f3a-9ff8-cebabc78a014 req-788fb01a-3fa9-4773-b5fe-2fa1157a5e0e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:10:44 compute-1 nova_compute[230183]: 2025-11-23 21:10:44.300 230187 DEBUG nova.compute.manager [req-dc4e1632-79dd-4f3a-9ff8-cebabc78a014 req-788fb01a-3fa9-4773-b5fe-2fa1157a5e0e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] No waiting events found dispatching network-vif-plugged-540c04be-373c-41ca-adee-2010345a34df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 21:10:44 compute-1 nova_compute[230183]: 2025-11-23 21:10:44.300 230187 WARNING nova.compute.manager [req-dc4e1632-79dd-4f3a-9ff8-cebabc78a014 req-788fb01a-3fa9-4773-b5fe-2fa1157a5e0e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Received unexpected event network-vif-plugged-540c04be-373c-41ca-adee-2010345a34df for instance with vm_state active and task_state deleting.
Nov 23 21:10:44 compute-1 ceph-mon[80135]: pgmap v879: 337 pgs: 337 active+clean; 96 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 14 KiB/s wr, 30 op/s
Nov 23 21:10:44 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:10:44 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:10:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:10:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:44.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:10:44 compute-1 nova_compute[230183]: 2025-11-23 21:10:44.821 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:10:44 compute-1 sudo[237053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:10:44 compute-1 sudo[237053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:10:44 compute-1 sudo[237053]: pam_unix(sudo:session): session closed for user root
Nov 23 21:10:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:45.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:45 compute-1 nova_compute[230183]: 2025-11-23 21:10:45.418 230187 DEBUG nova.network.neutron [-] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:10:45 compute-1 nova_compute[230183]: 2025-11-23 21:10:45.434 230187 INFO nova.compute.manager [-] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Took 5.09 seconds to deallocate network for instance.
Nov 23 21:10:45 compute-1 nova_compute[230183]: 2025-11-23 21:10:45.473 230187 DEBUG oslo_concurrency.lockutils [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:10:45 compute-1 nova_compute[230183]: 2025-11-23 21:10:45.474 230187 DEBUG oslo_concurrency.lockutils [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:10:45 compute-1 nova_compute[230183]: 2025-11-23 21:10:45.487 230187 DEBUG nova.compute.manager [req-49783259-4253-47e5-af24-a0641ab302dc req-a7315484-9869-4f92-8b32-fe39f240c204 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Received event network-vif-deleted-540c04be-373c-41ca-adee-2010345a34df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:10:45 compute-1 nova_compute[230183]: 2025-11-23 21:10:45.546 230187 DEBUG oslo_concurrency.processutils [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:10:45 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:10:45 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3590546491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:10:45 compute-1 nova_compute[230183]: 2025-11-23 21:10:45.977 230187 DEBUG oslo_concurrency.processutils [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:10:45 compute-1 nova_compute[230183]: 2025-11-23 21:10:45.983 230187 DEBUG nova.compute.provider_tree [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:10:46 compute-1 nova_compute[230183]: 2025-11-23 21:10:46.007 230187 DEBUG nova.scheduler.client.report [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:10:46 compute-1 nova_compute[230183]: 2025-11-23 21:10:46.028 230187 DEBUG oslo_concurrency.lockutils [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:10:46 compute-1 nova_compute[230183]: 2025-11-23 21:10:46.072 230187 INFO nova.scheduler.client.report [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Deleted allocations for instance 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880
Nov 23 21:10:46 compute-1 nova_compute[230183]: 2025-11-23 21:10:46.173 230187 DEBUG oslo_concurrency.lockutils [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:10:46 compute-1 ceph-mon[80135]: pgmap v880: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 15 KiB/s wr, 56 op/s
Nov 23 21:10:46 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3590546491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:10:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:46.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:47 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:10:47 compute-1 nova_compute[230183]: 2025-11-23 21:10:47.249 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:10:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:47.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:48 compute-1 ceph-mon[80135]: pgmap v881: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 1.7 KiB/s wr, 48 op/s
Nov 23 21:10:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:10:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:48.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:10:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:49.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:10:49 compute-1 nova_compute[230183]: 2025-11-23 21:10:49.824 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:10:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:50.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:50 compute-1 ceph-mon[80135]: pgmap v882: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.7 KiB/s wr, 48 op/s
Nov 23 21:10:50 compute-1 podman[237104]: 2025-11-23 21:10:50.660844111 +0000 UTC m=+0.065984631 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 23 21:10:50 compute-1 podman[237103]: 2025-11-23 21:10:50.69564274 +0000 UTC m=+0.099722071 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 23 21:10:50 compute-1 nova_compute[230183]: 2025-11-23 21:10:50.770 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:10:50 compute-1 nova_compute[230183]: 2025-11-23 21:10:50.887 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:10:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:10:51.067 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:10:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:10:51.067 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:10:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:10:51.067 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:10:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:51.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:52 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:10:52 compute-1 nova_compute[230183]: 2025-11-23 21:10:52.251 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:10:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:52.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:52 compute-1 ceph-mon[80135]: pgmap v883: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 1.7 KiB/s wr, 47 op/s
Nov 23 21:10:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:53.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:10:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:54.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:10:54 compute-1 ceph-mon[80135]: pgmap v884: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 938 B/s wr, 26 op/s
Nov 23 21:10:54 compute-1 nova_compute[230183]: 2025-11-23 21:10:54.755 230187 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763932239.7533598, 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 23 21:10:54 compute-1 nova_compute[230183]: 2025-11-23 21:10:54.755 230187 INFO nova.compute.manager [-] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] VM Stopped (Lifecycle Event)
Nov 23 21:10:54 compute-1 nova_compute[230183]: 2025-11-23 21:10:54.771 230187 DEBUG nova.compute.manager [None req-6c57b200-e3f2-40af-a01c-edaec737378d - - - - - -] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:10:54 compute-1 nova_compute[230183]: 2025-11-23 21:10:54.826 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:10:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:55.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:55 compute-1 podman[237150]: 2025-11-23 21:10:55.656250612 +0000 UTC m=+0.062060206 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 21:10:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:56.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:56 compute-1 ceph-mon[80135]: pgmap v885: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 938 B/s wr, 26 op/s
Nov 23 21:10:57 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:10:57 compute-1 nova_compute[230183]: 2025-11-23 21:10:57.252 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:10:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:10:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:57.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:10:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:58.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:58 compute-1 nova_compute[230183]: 2025-11-23 21:10:58.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:10:58 compute-1 ceph-mon[80135]: pgmap v886: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Nov 23 21:10:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:10:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:10:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:59.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:10:59 compute-1 nova_compute[230183]: 2025-11-23 21:10:59.829 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:00.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:00 compute-1 ceph-mon[80135]: pgmap v887: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 23 21:11:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:01.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:01 compute-1 nova_compute[230183]: 2025-11-23 21:11:01.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:11:01 compute-1 nova_compute[230183]: 2025-11-23 21:11:01.448 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:11:01 compute-1 nova_compute[230183]: 2025-11-23 21:11:01.449 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:11:01 compute-1 nova_compute[230183]: 2025-11-23 21:11:01.449 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:11:01 compute-1 nova_compute[230183]: 2025-11-23 21:11:01.449 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 21:11:01 compute-1 nova_compute[230183]: 2025-11-23 21:11:01.449 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:11:01 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:11:01 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2370529717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:11:01 compute-1 nova_compute[230183]: 2025-11-23 21:11:01.905 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:11:02 compute-1 nova_compute[230183]: 2025-11-23 21:11:02.046 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 21:11:02 compute-1 nova_compute[230183]: 2025-11-23 21:11:02.048 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4932MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 21:11:02 compute-1 nova_compute[230183]: 2025-11-23 21:11:02.048 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:11:02 compute-1 nova_compute[230183]: 2025-11-23 21:11:02.048 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:11:02 compute-1 nova_compute[230183]: 2025-11-23 21:11:02.100 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 21:11:02 compute-1 nova_compute[230183]: 2025-11-23 21:11:02.100 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 21:11:02 compute-1 nova_compute[230183]: 2025-11-23 21:11:02.119 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:11:02 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:11:02 compute-1 nova_compute[230183]: 2025-11-23 21:11:02.256 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:11:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:02.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:11:02 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:11:02 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/413225064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:11:02 compute-1 nova_compute[230183]: 2025-11-23 21:11:02.565 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:11:02 compute-1 nova_compute[230183]: 2025-11-23 21:11:02.570 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:11:02 compute-1 nova_compute[230183]: 2025-11-23 21:11:02.582 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:11:02 compute-1 ceph-mon[80135]: pgmap v888: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Nov 23 21:11:02 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2370529717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:11:02 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/413225064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:11:02 compute-1 nova_compute[230183]: 2025-11-23 21:11:02.608 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 21:11:02 compute-1 nova_compute[230183]: 2025-11-23 21:11:02.608 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:11:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:11:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:03.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:11:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:11:03 compute-1 nova_compute[230183]: 2025-11-23 21:11:03.604 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:11:03 compute-1 nova_compute[230183]: 2025-11-23 21:11:03.605 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:11:03 compute-1 nova_compute[230183]: 2025-11-23 21:11:03.605 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 21:11:03 compute-1 nova_compute[230183]: 2025-11-23 21:11:03.605 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 21:11:03 compute-1 nova_compute[230183]: 2025-11-23 21:11:03.624 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 21:11:03 compute-1 nova_compute[230183]: 2025-11-23 21:11:03.624 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:11:03 compute-1 nova_compute[230183]: 2025-11-23 21:11:03.624 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:11:03 compute-1 nova_compute[230183]: 2025-11-23 21:11:03.625 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:11:03 compute-1 nova_compute[230183]: 2025-11-23 21:11:03.625 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:11:03 compute-1 nova_compute[230183]: 2025-11-23 21:11:03.625 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 21:11:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:11:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:04.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:11:04 compute-1 nova_compute[230183]: 2025-11-23 21:11:04.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:11:04 compute-1 ceph-mon[80135]: pgmap v889: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Nov 23 21:11:04 compute-1 nova_compute[230183]: 2025-11-23 21:11:04.832 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:04 compute-1 sudo[237220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:11:04 compute-1 sudo[237220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:11:04 compute-1 sudo[237220]: pam_unix(sudo:session): session closed for user root
Nov 23 21:11:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:05.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:06 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 21:11:06 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 5424 writes, 28K keys, 5424 commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.04 MB/s
                                           Cumulative WAL: 5424 writes, 5424 syncs, 1.00 writes per sync, written: 0.07 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1496 writes, 6941 keys, 1496 commit groups, 1.0 writes per commit group, ingest: 16.65 MB, 0.03 MB/s
                                           Interval WAL: 1496 writes, 1496 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     56.6      0.72              0.10        14    0.051       0      0       0.0       0.0
                                             L6      1/0   12.44 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.2     86.3     74.0      2.29              0.45        13    0.176     67K   6874       0.0       0.0
                                            Sum      1/0   12.44 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.2     65.7     69.8      3.01              0.55        27    0.111     67K   6874       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.3     58.6     58.2      1.03              0.17         8    0.129     23K   2050       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0     86.3     74.0      2.29              0.45        13    0.176     67K   6874       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     56.7      0.72              0.10        13    0.055       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.040, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.20 GB write, 0.12 MB/s write, 0.19 GB read, 0.11 MB/s read, 3.0 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 1.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560649e57350#2 capacity: 304.00 MB usage: 14.73 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000101 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(795,14.18 MB,4.66492%) FilterBlock(27,201.42 KB,0.0647043%) IndexBlock(27,359.39 KB,0.11545%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 23 21:11:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:11:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:06.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:11:06 compute-1 ceph-mon[80135]: pgmap v890: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 op/s
Nov 23 21:11:07 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:11:07 compute-1 nova_compute[230183]: 2025-11-23 21:11:07.255 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:07.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:11:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:08.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:11:08 compute-1 nova_compute[230183]: 2025-11-23 21:11:08.571 230187 DEBUG oslo_concurrency.lockutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:11:08 compute-1 nova_compute[230183]: 2025-11-23 21:11:08.572 230187 DEBUG oslo_concurrency.lockutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:11:08 compute-1 nova_compute[230183]: 2025-11-23 21:11:08.585 230187 DEBUG nova.compute.manager [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 23 21:11:08 compute-1 nova_compute[230183]: 2025-11-23 21:11:08.669 230187 DEBUG oslo_concurrency.lockutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:11:08 compute-1 nova_compute[230183]: 2025-11-23 21:11:08.670 230187 DEBUG oslo_concurrency.lockutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:11:08 compute-1 nova_compute[230183]: 2025-11-23 21:11:08.675 230187 DEBUG nova.virt.hardware [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 23 21:11:08 compute-1 nova_compute[230183]: 2025-11-23 21:11:08.675 230187 INFO nova.compute.claims [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Claim successful on node compute-1.ctlplane.example.com
Nov 23 21:11:08 compute-1 ceph-mon[80135]: pgmap v891: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Nov 23 21:11:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1354367898' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:11:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/1250564092' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 21:11:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/1250564092' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 21:11:08 compute-1 nova_compute[230183]: 2025-11-23 21:11:08.759 230187 DEBUG oslo_concurrency.processutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:11:09 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:11:09 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4195050678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:11:09 compute-1 nova_compute[230183]: 2025-11-23 21:11:09.185 230187 DEBUG oslo_concurrency.processutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:11:09 compute-1 nova_compute[230183]: 2025-11-23 21:11:09.192 230187 DEBUG nova.compute.provider_tree [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:11:09 compute-1 nova_compute[230183]: 2025-11-23 21:11:09.207 230187 DEBUG nova.scheduler.client.report [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:11:09 compute-1 nova_compute[230183]: 2025-11-23 21:11:09.226 230187 DEBUG oslo_concurrency.lockutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:11:09 compute-1 nova_compute[230183]: 2025-11-23 21:11:09.227 230187 DEBUG nova.compute.manager [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 23 21:11:09 compute-1 nova_compute[230183]: 2025-11-23 21:11:09.270 230187 DEBUG nova.compute.manager [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 23 21:11:09 compute-1 nova_compute[230183]: 2025-11-23 21:11:09.270 230187 DEBUG nova.network.neutron [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 23 21:11:09 compute-1 nova_compute[230183]: 2025-11-23 21:11:09.284 230187 INFO nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 23 21:11:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:09.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:09 compute-1 nova_compute[230183]: 2025-11-23 21:11:09.298 230187 DEBUG nova.compute.manager [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 23 21:11:09 compute-1 nova_compute[230183]: 2025-11-23 21:11:09.408 230187 DEBUG nova.compute.manager [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 23 21:11:09 compute-1 nova_compute[230183]: 2025-11-23 21:11:09.410 230187 DEBUG nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 23 21:11:09 compute-1 nova_compute[230183]: 2025-11-23 21:11:09.410 230187 INFO nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Creating image(s)
Nov 23 21:11:09 compute-1 nova_compute[230183]: 2025-11-23 21:11:09.433 230187 DEBUG nova.storage.rbd_utils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:11:09 compute-1 nova_compute[230183]: 2025-11-23 21:11:09.466 230187 DEBUG nova.storage.rbd_utils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:11:09 compute-1 nova_compute[230183]: 2025-11-23 21:11:09.499 230187 DEBUG nova.storage.rbd_utils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:11:09 compute-1 nova_compute[230183]: 2025-11-23 21:11:09.504 230187 DEBUG oslo_concurrency.processutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:11:09 compute-1 nova_compute[230183]: 2025-11-23 21:11:09.528 230187 DEBUG nova.policy [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9fb5352c62684f2ba3a326a953a10dfe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '782593db60784ab8bff41fe87d72ff5f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 23 21:11:09 compute-1 nova_compute[230183]: 2025-11-23 21:11:09.566 230187 DEBUG oslo_concurrency.processutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:11:09 compute-1 nova_compute[230183]: 2025-11-23 21:11:09.566 230187 DEBUG oslo_concurrency.lockutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:11:09 compute-1 nova_compute[230183]: 2025-11-23 21:11:09.567 230187 DEBUG oslo_concurrency.lockutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:11:09 compute-1 nova_compute[230183]: 2025-11-23 21:11:09.567 230187 DEBUG oslo_concurrency.lockutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:11:09 compute-1 nova_compute[230183]: 2025-11-23 21:11:09.591 230187 DEBUG nova.storage.rbd_utils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:11:09 compute-1 nova_compute[230183]: 2025-11-23 21:11:09.594 230187 DEBUG oslo_concurrency.processutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:11:09 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/4229839563' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:11:09 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/4195050678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:11:09 compute-1 ceph-mon[80135]: pgmap v892: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 23 21:11:09 compute-1 nova_compute[230183]: 2025-11-23 21:11:09.835 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:09 compute-1 nova_compute[230183]: 2025-11-23 21:11:09.893 230187 DEBUG oslo_concurrency.processutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:11:09 compute-1 nova_compute[230183]: 2025-11-23 21:11:09.979 230187 DEBUG nova.storage.rbd_utils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] resizing rbd image 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 23 21:11:10 compute-1 nova_compute[230183]: 2025-11-23 21:11:10.100 230187 DEBUG nova.objects.instance [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'migration_context' on Instance uuid 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:11:10 compute-1 nova_compute[230183]: 2025-11-23 21:11:10.120 230187 DEBUG nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 23 21:11:10 compute-1 nova_compute[230183]: 2025-11-23 21:11:10.120 230187 DEBUG nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Ensure instance console log exists: /var/lib/nova/instances/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 23 21:11:10 compute-1 nova_compute[230183]: 2025-11-23 21:11:10.121 230187 DEBUG oslo_concurrency.lockutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:11:10 compute-1 nova_compute[230183]: 2025-11-23 21:11:10.121 230187 DEBUG oslo_concurrency.lockutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:11:10 compute-1 nova_compute[230183]: 2025-11-23 21:11:10.122 230187 DEBUG oslo_concurrency.lockutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:11:10 compute-1 nova_compute[230183]: 2025-11-23 21:11:10.195 230187 DEBUG nova.network.neutron [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Successfully created port: bdbb1df8-a028-4685-9661-24563619eb80 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 23 21:11:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:11:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:10.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:11:10 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2881988689' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:11:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:11.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:11 compute-1 nova_compute[230183]: 2025-11-23 21:11:11.623 230187 DEBUG nova.network.neutron [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Successfully updated port: bdbb1df8-a028-4685-9661-24563619eb80 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 23 21:11:11 compute-1 nova_compute[230183]: 2025-11-23 21:11:11.671 230187 DEBUG oslo_concurrency.lockutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:11:11 compute-1 nova_compute[230183]: 2025-11-23 21:11:11.672 230187 DEBUG oslo_concurrency.lockutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquired lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:11:11 compute-1 nova_compute[230183]: 2025-11-23 21:11:11.672 230187 DEBUG nova.network.neutron [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 23 21:11:11 compute-1 nova_compute[230183]: 2025-11-23 21:11:11.741 230187 DEBUG nova.compute.manager [req-8f870fea-b29d-4d8a-88bb-e6f622589e54 req-3305a24d-d027-469c-824c-297bfc5c47ec 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received event network-changed-bdbb1df8-a028-4685-9661-24563619eb80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:11:11 compute-1 nova_compute[230183]: 2025-11-23 21:11:11.741 230187 DEBUG nova.compute.manager [req-8f870fea-b29d-4d8a-88bb-e6f622589e54 req-3305a24d-d027-469c-824c-297bfc5c47ec 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Refreshing instance network info cache due to event network-changed-bdbb1df8-a028-4685-9661-24563619eb80. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 23 21:11:11 compute-1 nova_compute[230183]: 2025-11-23 21:11:11.741 230187 DEBUG oslo_concurrency.lockutils [req-8f870fea-b29d-4d8a-88bb-e6f622589e54 req-3305a24d-d027-469c-824c-297bfc5c47ec 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:11:11 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0.
Nov 23 21:11:11 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:11:11.758140) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 21:11:11 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52
Nov 23 21:11:11 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932271758186, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2373, "num_deletes": 251, "total_data_size": 6217556, "memory_usage": 6292320, "flush_reason": "Manual Compaction"}
Nov 23 21:11:11 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started
Nov 23 21:11:11 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1412639435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:11:11 compute-1 ceph-mon[80135]: pgmap v893: 337 pgs: 337 active+clean; 52 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 125 KiB/s wr, 1 op/s
Nov 23 21:11:11 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932271805614, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 4048390, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26129, "largest_seqno": 28497, "table_properties": {"data_size": 4038870, "index_size": 5950, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19848, "raw_average_key_size": 20, "raw_value_size": 4019905, "raw_average_value_size": 4118, "num_data_blocks": 261, "num_entries": 976, "num_filter_entries": 976, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763932059, "oldest_key_time": 1763932059, "file_creation_time": 1763932271, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 23 21:11:11 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 47771 microseconds, and 8905 cpu microseconds.
Nov 23 21:11:11 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 21:11:11 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:11:11.805844) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 4048390 bytes OK
Nov 23 21:11:11 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:11:11.806046) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started
Nov 23 21:11:11 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:11:11.807841) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done
Nov 23 21:11:11 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:11:11.807919) EVENT_LOG_v1 {"time_micros": 1763932271807910, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 21:11:11 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:11:11.807944) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 21:11:11 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 6207102, prev total WAL file size 6207102, number of live WAL files 2.
Nov 23 21:11:11 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:11:11 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:11:11.809571) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Nov 23 21:11:11 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 21:11:11 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(3953KB)], [51(12MB)]
Nov 23 21:11:11 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932271809686, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 17091173, "oldest_snapshot_seqno": -1}
Nov 23 21:11:11 compute-1 nova_compute[230183]: 2025-11-23 21:11:11.824 230187 DEBUG nova.network.neutron [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 23 21:11:12 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 5847 keys, 14928601 bytes, temperature: kUnknown
Nov 23 21:11:12 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932272082717, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 14928601, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14888535, "index_size": 24340, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14661, "raw_key_size": 148729, "raw_average_key_size": 25, "raw_value_size": 14782089, "raw_average_value_size": 2528, "num_data_blocks": 994, "num_entries": 5847, "num_filter_entries": 5847, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763932271, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}}
Nov 23 21:11:12 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 21:11:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:11:12.083208) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 14928601 bytes
Nov 23 21:11:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:11:12.085914) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 62.6 rd, 54.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 12.4 +0.0 blob) out(14.2 +0.0 blob), read-write-amplify(7.9) write-amplify(3.7) OK, records in: 6365, records dropped: 518 output_compression: NoCompression
Nov 23 21:11:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:11:12.085943) EVENT_LOG_v1 {"time_micros": 1763932272085929, "job": 30, "event": "compaction_finished", "compaction_time_micros": 273219, "compaction_time_cpu_micros": 50926, "output_level": 6, "num_output_files": 1, "total_output_size": 14928601, "num_input_records": 6365, "num_output_records": 5847, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 21:11:12 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:11:12 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932272087433, "job": 30, "event": "table_file_deletion", "file_number": 53}
Nov 23 21:11:12 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:11:12 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932272092268, "job": 30, "event": "table_file_deletion", "file_number": 51}
Nov 23 21:11:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:11:11.809372) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:11:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:11:12.092501) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:11:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:11:12.092514) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:11:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:11:12.092517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:11:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:11:12.092523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:11:12 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:11:12.092527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:11:12 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:11:12 compute-1 nova_compute[230183]: 2025-11-23 21:11:12.257 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:11:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:12.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:11:12 compute-1 nova_compute[230183]: 2025-11-23 21:11:12.862 230187 DEBUG nova.network.neutron [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updating instance_info_cache with network_info: [{"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:11:12 compute-1 nova_compute[230183]: 2025-11-23 21:11:12.883 230187 DEBUG oslo_concurrency.lockutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Releasing lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:11:12 compute-1 nova_compute[230183]: 2025-11-23 21:11:12.883 230187 DEBUG nova.compute.manager [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Instance network_info: |[{"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 23 21:11:12 compute-1 nova_compute[230183]: 2025-11-23 21:11:12.884 230187 DEBUG oslo_concurrency.lockutils [req-8f870fea-b29d-4d8a-88bb-e6f622589e54 req-3305a24d-d027-469c-824c-297bfc5c47ec 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:11:12 compute-1 nova_compute[230183]: 2025-11-23 21:11:12.884 230187 DEBUG nova.network.neutron [req-8f870fea-b29d-4d8a-88bb-e6f622589e54 req-3305a24d-d027-469c-824c-297bfc5c47ec 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Refreshing network info cache for port bdbb1df8-a028-4685-9661-24563619eb80 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 23 21:11:12 compute-1 nova_compute[230183]: 2025-11-23 21:11:12.886 230187 DEBUG nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Start _get_guest_xml network_info=[{"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T21:05:43Z,direct_url=<?>,disk_format='qcow2',id=3c45fa6c-8a99-4359-a34e-d89f4e1e77d0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3f8fb5175f85402ba20cf9c6989d47cf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-23T21:05:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_options': None, 'encryption_format': None, 'guest_format': None, 'encrypted': False, 'size': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'image_id': '3c45fa6c-8a99-4359-a34e-d89f4e1e77d0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 23 21:11:12 compute-1 nova_compute[230183]: 2025-11-23 21:11:12.890 230187 WARNING nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 21:11:12 compute-1 nova_compute[230183]: 2025-11-23 21:11:12.897 230187 DEBUG nova.virt.libvirt.host [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 23 21:11:12 compute-1 nova_compute[230183]: 2025-11-23 21:11:12.897 230187 DEBUG nova.virt.libvirt.host [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 23 21:11:12 compute-1 nova_compute[230183]: 2025-11-23 21:11:12.901 230187 DEBUG nova.virt.libvirt.host [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 23 21:11:12 compute-1 nova_compute[230183]: 2025-11-23 21:11:12.901 230187 DEBUG nova.virt.libvirt.host [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 23 21:11:12 compute-1 nova_compute[230183]: 2025-11-23 21:11:12.902 230187 DEBUG nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 23 21:11:12 compute-1 nova_compute[230183]: 2025-11-23 21:11:12.902 230187 DEBUG nova.virt.hardware [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-23T21:05:42Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='56044b93-2979-48aa-b67f-c37e1b489306',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T21:05:43Z,direct_url=<?>,disk_format='qcow2',id=3c45fa6c-8a99-4359-a34e-d89f4e1e77d0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3f8fb5175f85402ba20cf9c6989d47cf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-23T21:05:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 23 21:11:12 compute-1 nova_compute[230183]: 2025-11-23 21:11:12.902 230187 DEBUG nova.virt.hardware [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 23 21:11:12 compute-1 nova_compute[230183]: 2025-11-23 21:11:12.902 230187 DEBUG nova.virt.hardware [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 23 21:11:12 compute-1 nova_compute[230183]: 2025-11-23 21:11:12.902 230187 DEBUG nova.virt.hardware [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 23 21:11:12 compute-1 nova_compute[230183]: 2025-11-23 21:11:12.903 230187 DEBUG nova.virt.hardware [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 23 21:11:12 compute-1 nova_compute[230183]: 2025-11-23 21:11:12.903 230187 DEBUG nova.virt.hardware [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 23 21:11:12 compute-1 nova_compute[230183]: 2025-11-23 21:11:12.903 230187 DEBUG nova.virt.hardware [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 23 21:11:12 compute-1 nova_compute[230183]: 2025-11-23 21:11:12.903 230187 DEBUG nova.virt.hardware [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 23 21:11:12 compute-1 nova_compute[230183]: 2025-11-23 21:11:12.903 230187 DEBUG nova.virt.hardware [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 23 21:11:12 compute-1 nova_compute[230183]: 2025-11-23 21:11:12.903 230187 DEBUG nova.virt.hardware [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 23 21:11:12 compute-1 nova_compute[230183]: 2025-11-23 21:11:12.904 230187 DEBUG nova.virt.hardware [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 23 21:11:12 compute-1 nova_compute[230183]: 2025-11-23 21:11:12.906 230187 DEBUG oslo_concurrency.processutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:11:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:13.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:13 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 21:11:13 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2203487789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:11:13 compute-1 nova_compute[230183]: 2025-11-23 21:11:13.395 230187 DEBUG oslo_concurrency.processutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:11:13 compute-1 nova_compute[230183]: 2025-11-23 21:11:13.419 230187 DEBUG nova.storage.rbd_utils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:11:13 compute-1 nova_compute[230183]: 2025-11-23 21:11:13.422 230187 DEBUG oslo_concurrency.processutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:11:13 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 21:11:13 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1161995047' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:11:13 compute-1 nova_compute[230183]: 2025-11-23 21:11:13.875 230187 DEBUG oslo_concurrency.processutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:11:13 compute-1 nova_compute[230183]: 2025-11-23 21:11:13.877 230187 DEBUG nova.virt.libvirt.vif [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-23T21:11:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1210792474',display_name='tempest-TestNetworkBasicOps-server-1210792474',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1210792474',id=6,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC9I5o3FOJoMlLS5RVHvg4JB6VMA0TLpRAHrRWOuj73hgQ5knZWkP8wznWff+IF5v3eA9GQgz9kKnWlcz54pfIskwjEMQ8tpar2NP2dJjbFuASygJ+AuXJaTUib24SH0fw==',key_name='tempest-TestNetworkBasicOps-192906804',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-jk4nm00m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T21:11:09Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=4bac23b8-7bcd-4f5e-89a8-b035a16ffe36,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 23 21:11:13 compute-1 nova_compute[230183]: 2025-11-23 21:11:13.877 230187 DEBUG nova.network.os_vif_util [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 23 21:11:13 compute-1 nova_compute[230183]: 2025-11-23 21:11:13.878 230187 DEBUG nova.network.os_vif_util [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:c9:f4,bridge_name='br-int',has_traffic_filtering=True,id=bdbb1df8-a028-4685-9661-24563619eb80,network=Network(aa502c12-d22c-490c-942b-57c2b1624866),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdbb1df8-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 23 21:11:13 compute-1 nova_compute[230183]: 2025-11-23 21:11:13.879 230187 DEBUG nova.objects.instance [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'pci_devices' on Instance uuid 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:11:13 compute-1 nova_compute[230183]: 2025-11-23 21:11:13.895 230187 DEBUG nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] End _get_guest_xml xml=<domain type="kvm">
Nov 23 21:11:13 compute-1 nova_compute[230183]:   <uuid>4bac23b8-7bcd-4f5e-89a8-b035a16ffe36</uuid>
Nov 23 21:11:13 compute-1 nova_compute[230183]:   <name>instance-00000006</name>
Nov 23 21:11:13 compute-1 nova_compute[230183]:   <memory>131072</memory>
Nov 23 21:11:13 compute-1 nova_compute[230183]:   <vcpu>1</vcpu>
Nov 23 21:11:13 compute-1 nova_compute[230183]:   <metadata>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 21:11:13 compute-1 nova_compute[230183]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:       <nova:name>tempest-TestNetworkBasicOps-server-1210792474</nova:name>
Nov 23 21:11:13 compute-1 nova_compute[230183]:       <nova:creationTime>2025-11-23 21:11:12</nova:creationTime>
Nov 23 21:11:13 compute-1 nova_compute[230183]:       <nova:flavor name="m1.nano">
Nov 23 21:11:13 compute-1 nova_compute[230183]:         <nova:memory>128</nova:memory>
Nov 23 21:11:13 compute-1 nova_compute[230183]:         <nova:disk>1</nova:disk>
Nov 23 21:11:13 compute-1 nova_compute[230183]:         <nova:swap>0</nova:swap>
Nov 23 21:11:13 compute-1 nova_compute[230183]:         <nova:ephemeral>0</nova:ephemeral>
Nov 23 21:11:13 compute-1 nova_compute[230183]:         <nova:vcpus>1</nova:vcpus>
Nov 23 21:11:13 compute-1 nova_compute[230183]:       </nova:flavor>
Nov 23 21:11:13 compute-1 nova_compute[230183]:       <nova:owner>
Nov 23 21:11:13 compute-1 nova_compute[230183]:         <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 21:11:13 compute-1 nova_compute[230183]:         <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 21:11:13 compute-1 nova_compute[230183]:       </nova:owner>
Nov 23 21:11:13 compute-1 nova_compute[230183]:       <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:       <nova:ports>
Nov 23 21:11:13 compute-1 nova_compute[230183]:         <nova:port uuid="bdbb1df8-a028-4685-9661-24563619eb80">
Nov 23 21:11:13 compute-1 nova_compute[230183]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:         </nova:port>
Nov 23 21:11:13 compute-1 nova_compute[230183]:       </nova:ports>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     </nova:instance>
Nov 23 21:11:13 compute-1 nova_compute[230183]:   </metadata>
Nov 23 21:11:13 compute-1 nova_compute[230183]:   <sysinfo type="smbios">
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <system>
Nov 23 21:11:13 compute-1 nova_compute[230183]:       <entry name="manufacturer">RDO</entry>
Nov 23 21:11:13 compute-1 nova_compute[230183]:       <entry name="product">OpenStack Compute</entry>
Nov 23 21:11:13 compute-1 nova_compute[230183]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 23 21:11:13 compute-1 nova_compute[230183]:       <entry name="serial">4bac23b8-7bcd-4f5e-89a8-b035a16ffe36</entry>
Nov 23 21:11:13 compute-1 nova_compute[230183]:       <entry name="uuid">4bac23b8-7bcd-4f5e-89a8-b035a16ffe36</entry>
Nov 23 21:11:13 compute-1 nova_compute[230183]:       <entry name="family">Virtual Machine</entry>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     </system>
Nov 23 21:11:13 compute-1 nova_compute[230183]:   </sysinfo>
Nov 23 21:11:13 compute-1 nova_compute[230183]:   <os>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <boot dev="hd"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <smbios mode="sysinfo"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:   </os>
Nov 23 21:11:13 compute-1 nova_compute[230183]:   <features>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <acpi/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <apic/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <vmcoreinfo/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:   </features>
Nov 23 21:11:13 compute-1 nova_compute[230183]:   <clock offset="utc">
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <timer name="pit" tickpolicy="delay"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <timer name="hpet" present="no"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:   </clock>
Nov 23 21:11:13 compute-1 nova_compute[230183]:   <cpu mode="host-model" match="exact">
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <topology sockets="1" cores="1" threads="1"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:   </cpu>
Nov 23 21:11:13 compute-1 nova_compute[230183]:   <devices>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <disk type="network" device="disk">
Nov 23 21:11:13 compute-1 nova_compute[230183]:       <driver type="raw" cache="none"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:       <source protocol="rbd" name="vms/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk">
Nov 23 21:11:13 compute-1 nova_compute[230183]:         <host name="192.168.122.100" port="6789"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:         <host name="192.168.122.102" port="6789"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:         <host name="192.168.122.101" port="6789"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:       </source>
Nov 23 21:11:13 compute-1 nova_compute[230183]:       <auth username="openstack">
Nov 23 21:11:13 compute-1 nova_compute[230183]:         <secret type="ceph" uuid="03808be8-ae4a-5548-82e6-4a294f1bc627"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:       </auth>
Nov 23 21:11:13 compute-1 nova_compute[230183]:       <target dev="vda" bus="virtio"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     </disk>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <disk type="network" device="cdrom">
Nov 23 21:11:13 compute-1 nova_compute[230183]:       <driver type="raw" cache="none"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:       <source protocol="rbd" name="vms/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk.config">
Nov 23 21:11:13 compute-1 nova_compute[230183]:         <host name="192.168.122.100" port="6789"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:         <host name="192.168.122.102" port="6789"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:         <host name="192.168.122.101" port="6789"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:       </source>
Nov 23 21:11:13 compute-1 nova_compute[230183]:       <auth username="openstack">
Nov 23 21:11:13 compute-1 nova_compute[230183]:         <secret type="ceph" uuid="03808be8-ae4a-5548-82e6-4a294f1bc627"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:       </auth>
Nov 23 21:11:13 compute-1 nova_compute[230183]:       <target dev="sda" bus="sata"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     </disk>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <interface type="ethernet">
Nov 23 21:11:13 compute-1 nova_compute[230183]:       <mac address="fa:16:3e:f3:c9:f4"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:       <model type="virtio"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:       <driver name="vhost" rx_queue_size="512"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:       <mtu size="1442"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:       <target dev="tapbdbb1df8-a0"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     </interface>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <serial type="pty">
Nov 23 21:11:13 compute-1 nova_compute[230183]:       <log file="/var/lib/nova/instances/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36/console.log" append="off"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     </serial>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <video>
Nov 23 21:11:13 compute-1 nova_compute[230183]:       <model type="virtio"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     </video>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <input type="tablet" bus="usb"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <rng model="virtio">
Nov 23 21:11:13 compute-1 nova_compute[230183]:       <backend model="random">/dev/urandom</backend>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     </rng>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <controller type="usb" index="0"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     <memballoon model="virtio">
Nov 23 21:11:13 compute-1 nova_compute[230183]:       <stats period="10"/>
Nov 23 21:11:13 compute-1 nova_compute[230183]:     </memballoon>
Nov 23 21:11:13 compute-1 nova_compute[230183]:   </devices>
Nov 23 21:11:13 compute-1 nova_compute[230183]: </domain>
Nov 23 21:11:13 compute-1 nova_compute[230183]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 23 21:11:13 compute-1 nova_compute[230183]: 2025-11-23 21:11:13.897 230187 DEBUG nova.compute.manager [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Preparing to wait for external event network-vif-plugged-bdbb1df8-a028-4685-9661-24563619eb80 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 23 21:11:13 compute-1 nova_compute[230183]: 2025-11-23 21:11:13.897 230187 DEBUG oslo_concurrency.lockutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:11:13 compute-1 nova_compute[230183]: 2025-11-23 21:11:13.898 230187 DEBUG oslo_concurrency.lockutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:11:13 compute-1 nova_compute[230183]: 2025-11-23 21:11:13.898 230187 DEBUG oslo_concurrency.lockutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:11:13 compute-1 nova_compute[230183]: 2025-11-23 21:11:13.899 230187 DEBUG nova.virt.libvirt.vif [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-23T21:11:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1210792474',display_name='tempest-TestNetworkBasicOps-server-1210792474',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1210792474',id=6,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC9I5o3FOJoMlLS5RVHvg4JB6VMA0TLpRAHrRWOuj73hgQ5knZWkP8wznWff+IF5v3eA9GQgz9kKnWlcz54pfIskwjEMQ8tpar2NP2dJjbFuASygJ+AuXJaTUib24SH0fw==',key_name='tempest-TestNetworkBasicOps-192906804',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-jk4nm00m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T21:11:09Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=4bac23b8-7bcd-4f5e-89a8-b035a16ffe36,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 23 21:11:13 compute-1 nova_compute[230183]: 2025-11-23 21:11:13.900 230187 DEBUG nova.network.os_vif_util [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 23 21:11:13 compute-1 nova_compute[230183]: 2025-11-23 21:11:13.900 230187 DEBUG nova.network.os_vif_util [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:c9:f4,bridge_name='br-int',has_traffic_filtering=True,id=bdbb1df8-a028-4685-9661-24563619eb80,network=Network(aa502c12-d22c-490c-942b-57c2b1624866),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdbb1df8-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 23 21:11:13 compute-1 nova_compute[230183]: 2025-11-23 21:11:13.901 230187 DEBUG os_vif [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:c9:f4,bridge_name='br-int',has_traffic_filtering=True,id=bdbb1df8-a028-4685-9661-24563619eb80,network=Network(aa502c12-d22c-490c-942b-57c2b1624866),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdbb1df8-a0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 23 21:11:13 compute-1 nova_compute[230183]: 2025-11-23 21:11:13.902 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:13 compute-1 nova_compute[230183]: 2025-11-23 21:11:13.902 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:11:13 compute-1 nova_compute[230183]: 2025-11-23 21:11:13.903 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 23 21:11:13 compute-1 nova_compute[230183]: 2025-11-23 21:11:13.908 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:13 compute-1 nova_compute[230183]: 2025-11-23 21:11:13.909 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbdbb1df8-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:11:13 compute-1 nova_compute[230183]: 2025-11-23 21:11:13.909 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbdbb1df8-a0, col_values=(('external_ids', {'iface-id': 'bdbb1df8-a028-4685-9661-24563619eb80', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f3:c9:f4', 'vm-uuid': '4bac23b8-7bcd-4f5e-89a8-b035a16ffe36'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:11:13 compute-1 nova_compute[230183]: 2025-11-23 21:11:13.911 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:13 compute-1 NetworkManager[49021]: <info>  [1763932273.9123] manager: (tapbdbb1df8-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Nov 23 21:11:13 compute-1 nova_compute[230183]: 2025-11-23 21:11:13.914 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:11:13 compute-1 nova_compute[230183]: 2025-11-23 21:11:13.917 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:13 compute-1 nova_compute[230183]: 2025-11-23 21:11:13.918 230187 INFO os_vif [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:c9:f4,bridge_name='br-int',has_traffic_filtering=True,id=bdbb1df8-a028-4685-9661-24563619eb80,network=Network(aa502c12-d22c-490c-942b-57c2b1624866),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdbb1df8-a0')
Nov 23 21:11:13 compute-1 nova_compute[230183]: 2025-11-23 21:11:13.975 230187 DEBUG nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 23 21:11:13 compute-1 nova_compute[230183]: 2025-11-23 21:11:13.975 230187 DEBUG nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 23 21:11:13 compute-1 nova_compute[230183]: 2025-11-23 21:11:13.975 230187 DEBUG nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No VIF found with MAC fa:16:3e:f3:c9:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 23 21:11:13 compute-1 nova_compute[230183]: 2025-11-23 21:11:13.976 230187 INFO nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Using config drive
Nov 23 21:11:14 compute-1 nova_compute[230183]: 2025-11-23 21:11:14.001 230187 DEBUG nova.storage.rbd_utils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:11:14 compute-1 ceph-mon[80135]: pgmap v894: 337 pgs: 337 active+clean; 52 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 125 KiB/s wr, 1 op/s
Nov 23 21:11:14 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2203487789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:11:14 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1161995047' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:11:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:14.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:15 compute-1 nova_compute[230183]: 2025-11-23 21:11:15.242 230187 INFO nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Creating config drive at /var/lib/nova/instances/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36/disk.config
Nov 23 21:11:15 compute-1 nova_compute[230183]: 2025-11-23 21:11:15.251 230187 DEBUG oslo_concurrency.processutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7sm8t84h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:11:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:15.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:15 compute-1 nova_compute[230183]: 2025-11-23 21:11:15.378 230187 DEBUG oslo_concurrency.processutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7sm8t84h" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:11:15 compute-1 nova_compute[230183]: 2025-11-23 21:11:15.409 230187 DEBUG nova.storage.rbd_utils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:11:15 compute-1 nova_compute[230183]: 2025-11-23 21:11:15.412 230187 DEBUG oslo_concurrency.processutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36/disk.config 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:11:15 compute-1 nova_compute[230183]: 2025-11-23 21:11:15.574 230187 DEBUG oslo_concurrency.processutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36/disk.config 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:11:15 compute-1 nova_compute[230183]: 2025-11-23 21:11:15.576 230187 INFO nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Deleting local config drive /var/lib/nova/instances/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36/disk.config because it was imported into RBD.
Nov 23 21:11:15 compute-1 kernel: tapbdbb1df8-a0: entered promiscuous mode
Nov 23 21:11:15 compute-1 NetworkManager[49021]: <info>  [1763932275.6185] manager: (tapbdbb1df8-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/49)
Nov 23 21:11:15 compute-1 nova_compute[230183]: 2025-11-23 21:11:15.618 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:15 compute-1 ovn_controller[132845]: 2025-11-23T21:11:15Z|00074|binding|INFO|Claiming lport bdbb1df8-a028-4685-9661-24563619eb80 for this chassis.
Nov 23 21:11:15 compute-1 ovn_controller[132845]: 2025-11-23T21:11:15Z|00075|binding|INFO|bdbb1df8-a028-4685-9661-24563619eb80: Claiming fa:16:3e:f3:c9:f4 10.100.0.12
Nov 23 21:11:15 compute-1 nova_compute[230183]: 2025-11-23 21:11:15.623 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:15 compute-1 nova_compute[230183]: 2025-11-23 21:11:15.625 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:15 compute-1 systemd-machined[193469]: New machine qemu-4-instance-00000006.
Nov 23 21:11:15 compute-1 systemd-udevd[237574]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 21:11:15 compute-1 NetworkManager[49021]: <info>  [1763932275.6615] device (tapbdbb1df8-a0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 23 21:11:15 compute-1 NetworkManager[49021]: <info>  [1763932275.6625] device (tapbdbb1df8-a0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 23 21:11:15 compute-1 systemd[1]: Started Virtual Machine qemu-4-instance-00000006.
Nov 23 21:11:15 compute-1 nova_compute[230183]: 2025-11-23 21:11:15.686 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:15 compute-1 ovn_controller[132845]: 2025-11-23T21:11:15Z|00076|binding|INFO|Setting lport bdbb1df8-a028-4685-9661-24563619eb80 ovn-installed in OVS
Nov 23 21:11:15 compute-1 nova_compute[230183]: 2025-11-23 21:11:15.690 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:15 compute-1 ovn_controller[132845]: 2025-11-23T21:11:15Z|00077|binding|INFO|Setting lport bdbb1df8-a028-4685-9661-24563619eb80 up in Southbound
Nov 23 21:11:15 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.760 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:c9:f4 10.100.0.12'], port_security=['fa:16:3e:f3:c9:f4 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4bac23b8-7bcd-4f5e-89a8-b035a16ffe36', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa502c12-d22c-490c-942b-57c2b1624866', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '30b87ecc-e7bf-46f1-a605-8bcfe0ecba45', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8207d226-2b2e-4ad5-9d7b-3777cdc61652, chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=bdbb1df8-a028-4685-9661-24563619eb80) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 21:11:15 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.761 142158 INFO neutron.agent.ovn.metadata.agent [-] Port bdbb1df8-a028-4685-9661-24563619eb80 in datapath aa502c12-d22c-490c-942b-57c2b1624866 bound to our chassis
Nov 23 21:11:15 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.762 142158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa502c12-d22c-490c-942b-57c2b1624866
Nov 23 21:11:15 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.775 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[ee3a59c1-d9a7-494a-9848-ef30b4beee3e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:11:15 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.775 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaa502c12-d1 in ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 23 21:11:15 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.777 233901 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaa502c12-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 23 21:11:15 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.777 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[e7f8a4f1-ccb6-45b1-9d1e-82a6a9050258]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:11:15 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.778 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[5ddb0a23-e88e-4032-b4de-e39a5dda4e1c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:11:15 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.794 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[890f846b-84af-475b-a06e-b1b848f48ff1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:11:15 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.815 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[2bc828ce-d9ff-47cf-88c2-c3d7002a0e71]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:11:15 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.844 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[943736b8-65ec-4de1-b9aa-fc7997494611]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:11:15 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.853 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[0b953b2f-4e56-43e7-bb48-f9167a836c5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:11:15 compute-1 NetworkManager[49021]: <info>  [1763932275.8545] manager: (tapaa502c12-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/50)
Nov 23 21:11:15 compute-1 systemd-udevd[237576]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 21:11:15 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.885 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[7f0d20ff-959d-4ce7-a429-215a4e792f25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:11:15 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.888 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[0bdb29f8-380a-42d9-8990-1efc07d0059e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:11:15 compute-1 NetworkManager[49021]: <info>  [1763932275.9078] device (tapaa502c12-d0): carrier: link connected
Nov 23 21:11:15 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.913 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[cf3bb504-75a5-4299-942a-3cb199d8337b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:11:15 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.930 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[4e2420b9-6459-440f-9b61-b40282b742d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa502c12-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:8b:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422448, 'reachable_time': 42322, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237609, 'error': None, 'target': 'ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:11:15 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.945 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[c61bb352-ea26-4ce8-aab6-c7ec918d7af9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe81:8b05'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 422448, 'tstamp': 422448}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 237610, 'error': None, 'target': 'ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:11:15 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.961 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[79b8e250-f04c-4d9f-b028-c96146080ec5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa502c12-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:8b:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422448, 'reachable_time': 42322, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 237611, 'error': None, 'target': 'ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:11:15 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.996 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[48b9365b-e4ce-4c77-963f-770a9ca0daca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:11:16 compute-1 nova_compute[230183]: 2025-11-23 21:11:16.035 230187 DEBUG nova.network.neutron [req-8f870fea-b29d-4d8a-88bb-e6f622589e54 req-3305a24d-d027-469c-824c-297bfc5c47ec 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updated VIF entry in instance network info cache for port bdbb1df8-a028-4685-9661-24563619eb80. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 23 21:11:16 compute-1 nova_compute[230183]: 2025-11-23 21:11:16.036 230187 DEBUG nova.network.neutron [req-8f870fea-b29d-4d8a-88bb-e6f622589e54 req-3305a24d-d027-469c-824c-297bfc5c47ec 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updating instance_info_cache with network_info: [{"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:11:16 compute-1 nova_compute[230183]: 2025-11-23 21:11:16.050 230187 DEBUG oslo_concurrency.lockutils [req-8f870fea-b29d-4d8a-88bb-e6f622589e54 req-3305a24d-d027-469c-824c-297bfc5c47ec 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:16.061 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[cf21341d-a96b-497d-993c-1b9452b4a282]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:16.062 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa502c12-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:16.062 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:16.063 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa502c12-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:11:16 compute-1 nova_compute[230183]: 2025-11-23 21:11:16.064 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:16 compute-1 NetworkManager[49021]: <info>  [1763932276.0652] manager: (tapaa502c12-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Nov 23 21:11:16 compute-1 kernel: tapaa502c12-d0: entered promiscuous mode
Nov 23 21:11:16 compute-1 nova_compute[230183]: 2025-11-23 21:11:16.067 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:16.068 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa502c12-d0, col_values=(('external_ids', {'iface-id': '882afaa1-9000-493d-808e-b1d906b6e642'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:11:16 compute-1 ovn_controller[132845]: 2025-11-23T21:11:16Z|00078|binding|INFO|Releasing lport 882afaa1-9000-493d-808e-b1d906b6e642 from this chassis (sb_readonly=0)
Nov 23 21:11:16 compute-1 nova_compute[230183]: 2025-11-23 21:11:16.083 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:16.084 142158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aa502c12-d22c-490c-942b-57c2b1624866.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aa502c12-d22c-490c-942b-57c2b1624866.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:16.085 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[587800fc-69d0-4faf-be0e-ae59fe288aa7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:16.085 142158 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]: global
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]:     log         /dev/log local0 debug
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]:     log-tag     haproxy-metadata-proxy-aa502c12-d22c-490c-942b-57c2b1624866
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]:     user        root
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]:     group       root
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]:     maxconn     1024
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]:     pidfile     /var/lib/neutron/external/pids/aa502c12-d22c-490c-942b-57c2b1624866.pid.haproxy
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]:     daemon
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]: 
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]: defaults
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]:     log global
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]:     mode http
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]:     option httplog
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]:     option dontlognull
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]:     option http-server-close
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]:     option forwardfor
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]:     retries                 3
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]:     timeout http-request    30s
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]:     timeout connect         30s
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]:     timeout client          32s
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]:     timeout server          32s
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]:     timeout http-keep-alive 30s
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]: 
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]: 
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]: listen listener
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]:     bind 169.254.169.254:80
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]:     server metadata /var/lib/neutron/metadata_proxy
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]:     http-request add-header X-OVN-Network-ID aa502c12-d22c-490c-942b-57c2b1624866
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 23 21:11:16 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:16.086 142158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866', 'env', 'PROCESS_TAG=haproxy-aa502c12-d22c-490c-942b-57c2b1624866', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aa502c12-d22c-490c-942b-57c2b1624866.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 23 21:11:16 compute-1 nova_compute[230183]: 2025-11-23 21:11:16.202 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932276.2016847, 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 23 21:11:16 compute-1 nova_compute[230183]: 2025-11-23 21:11:16.202 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] VM Started (Lifecycle Event)
Nov 23 21:11:16 compute-1 nova_compute[230183]: 2025-11-23 21:11:16.235 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:11:16 compute-1 nova_compute[230183]: 2025-11-23 21:11:16.238 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932276.2026641, 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 23 21:11:16 compute-1 nova_compute[230183]: 2025-11-23 21:11:16.239 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] VM Paused (Lifecycle Event)
Nov 23 21:11:16 compute-1 nova_compute[230183]: 2025-11-23 21:11:16.242 230187 DEBUG nova.compute.manager [req-16f741ad-7acf-4753-8335-13f6e75cb21d req-ca2f103e-0009-4fa9-bdc2-a779a773660f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received event network-vif-plugged-bdbb1df8-a028-4685-9661-24563619eb80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:11:16 compute-1 nova_compute[230183]: 2025-11-23 21:11:16.242 230187 DEBUG oslo_concurrency.lockutils [req-16f741ad-7acf-4753-8335-13f6e75cb21d req-ca2f103e-0009-4fa9-bdc2-a779a773660f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:11:16 compute-1 nova_compute[230183]: 2025-11-23 21:11:16.242 230187 DEBUG oslo_concurrency.lockutils [req-16f741ad-7acf-4753-8335-13f6e75cb21d req-ca2f103e-0009-4fa9-bdc2-a779a773660f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:11:16 compute-1 nova_compute[230183]: 2025-11-23 21:11:16.243 230187 DEBUG oslo_concurrency.lockutils [req-16f741ad-7acf-4753-8335-13f6e75cb21d req-ca2f103e-0009-4fa9-bdc2-a779a773660f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:11:16 compute-1 nova_compute[230183]: 2025-11-23 21:11:16.243 230187 DEBUG nova.compute.manager [req-16f741ad-7acf-4753-8335-13f6e75cb21d req-ca2f103e-0009-4fa9-bdc2-a779a773660f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Processing event network-vif-plugged-bdbb1df8-a028-4685-9661-24563619eb80 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 23 21:11:16 compute-1 nova_compute[230183]: 2025-11-23 21:11:16.244 230187 DEBUG nova.compute.manager [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 23 21:11:16 compute-1 nova_compute[230183]: 2025-11-23 21:11:16.247 230187 DEBUG nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 23 21:11:16 compute-1 nova_compute[230183]: 2025-11-23 21:11:16.249 230187 INFO nova.virt.libvirt.driver [-] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Instance spawned successfully.
Nov 23 21:11:16 compute-1 nova_compute[230183]: 2025-11-23 21:11:16.250 230187 DEBUG nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 23 21:11:16 compute-1 nova_compute[230183]: 2025-11-23 21:11:16.254 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:11:16 compute-1 nova_compute[230183]: 2025-11-23 21:11:16.258 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932276.246582, 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 23 21:11:16 compute-1 nova_compute[230183]: 2025-11-23 21:11:16.259 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] VM Resumed (Lifecycle Event)
Nov 23 21:11:16 compute-1 nova_compute[230183]: 2025-11-23 21:11:16.267 230187 DEBUG nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:11:16 compute-1 nova_compute[230183]: 2025-11-23 21:11:16.267 230187 DEBUG nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:11:16 compute-1 nova_compute[230183]: 2025-11-23 21:11:16.268 230187 DEBUG nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:11:16 compute-1 nova_compute[230183]: 2025-11-23 21:11:16.268 230187 DEBUG nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:11:16 compute-1 nova_compute[230183]: 2025-11-23 21:11:16.269 230187 DEBUG nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:11:16 compute-1 nova_compute[230183]: 2025-11-23 21:11:16.270 230187 DEBUG nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:11:16 compute-1 nova_compute[230183]: 2025-11-23 21:11:16.276 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:11:16 compute-1 nova_compute[230183]: 2025-11-23 21:11:16.280 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 23 21:11:16 compute-1 nova_compute[230183]: 2025-11-23 21:11:16.312 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 23 21:11:16 compute-1 ceph-mon[80135]: pgmap v895: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 23 21:11:16 compute-1 nova_compute[230183]: 2025-11-23 21:11:16.343 230187 INFO nova.compute.manager [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Took 6.93 seconds to spawn the instance on the hypervisor.
Nov 23 21:11:16 compute-1 nova_compute[230183]: 2025-11-23 21:11:16.345 230187 DEBUG nova.compute.manager [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:11:16 compute-1 nova_compute[230183]: 2025-11-23 21:11:16.411 230187 INFO nova.compute.manager [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Took 7.78 seconds to build instance.
Nov 23 21:11:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:16.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:16 compute-1 nova_compute[230183]: 2025-11-23 21:11:16.427 230187 DEBUG oslo_concurrency.lockutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.856s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:11:16 compute-1 podman[237685]: 2025-11-23 21:11:16.421107903 +0000 UTC m=+0.031649625 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 23 21:11:16 compute-1 podman[237685]: 2025-11-23 21:11:16.691458756 +0000 UTC m=+0.302000418 container create ae0b27a770ef2cf43f82c68adc6365354c06936a9cd7f93f4e5ec82be240a600 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 21:11:16 compute-1 systemd[1]: Started libpod-conmon-ae0b27a770ef2cf43f82c68adc6365354c06936a9cd7f93f4e5ec82be240a600.scope.
Nov 23 21:11:16 compute-1 systemd[1]: Started libcrun container.
Nov 23 21:11:16 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d0c50b76b192f5ce5a4fd663eee6064b85b526a900eeee678e7ce0a629a71ae/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 21:11:16 compute-1 podman[237685]: 2025-11-23 21:11:16.83177424 +0000 UTC m=+0.442315862 container init ae0b27a770ef2cf43f82c68adc6365354c06936a9cd7f93f4e5ec82be240a600 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 23 21:11:16 compute-1 podman[237685]: 2025-11-23 21:11:16.83778646 +0000 UTC m=+0.448328092 container start ae0b27a770ef2cf43f82c68adc6365354c06936a9cd7f93f4e5ec82be240a600 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 23 21:11:16 compute-1 neutron-haproxy-ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866[237700]: [NOTICE]   (237704) : New worker (237706) forked
Nov 23 21:11:16 compute-1 neutron-haproxy-ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866[237700]: [NOTICE]   (237704) : Loading success.
Nov 23 21:11:17 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:11:17 compute-1 nova_compute[230183]: 2025-11-23 21:11:17.260 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:11:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:17.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:11:18 compute-1 ceph-mon[80135]: pgmap v896: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 23 21:11:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:11:18 compute-1 nova_compute[230183]: 2025-11-23 21:11:18.384 230187 DEBUG nova.compute.manager [req-93eecc34-08b6-4a1d-926d-7359a7a4080c req-d40072df-e965-4802-b6cc-b580d2ede504 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received event network-vif-plugged-bdbb1df8-a028-4685-9661-24563619eb80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:11:18 compute-1 nova_compute[230183]: 2025-11-23 21:11:18.389 230187 DEBUG oslo_concurrency.lockutils [req-93eecc34-08b6-4a1d-926d-7359a7a4080c req-d40072df-e965-4802-b6cc-b580d2ede504 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:11:18 compute-1 nova_compute[230183]: 2025-11-23 21:11:18.395 230187 DEBUG oslo_concurrency.lockutils [req-93eecc34-08b6-4a1d-926d-7359a7a4080c req-d40072df-e965-4802-b6cc-b580d2ede504 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:11:18 compute-1 nova_compute[230183]: 2025-11-23 21:11:18.399 230187 DEBUG oslo_concurrency.lockutils [req-93eecc34-08b6-4a1d-926d-7359a7a4080c req-d40072df-e965-4802-b6cc-b580d2ede504 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:11:18 compute-1 nova_compute[230183]: 2025-11-23 21:11:18.402 230187 DEBUG nova.compute.manager [req-93eecc34-08b6-4a1d-926d-7359a7a4080c req-d40072df-e965-4802-b6cc-b580d2ede504 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] No waiting events found dispatching network-vif-plugged-bdbb1df8-a028-4685-9661-24563619eb80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 21:11:18 compute-1 nova_compute[230183]: 2025-11-23 21:11:18.404 230187 WARNING nova.compute.manager [req-93eecc34-08b6-4a1d-926d-7359a7a4080c req-d40072df-e965-4802-b6cc-b580d2ede504 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received unexpected event network-vif-plugged-bdbb1df8-a028-4685-9661-24563619eb80 for instance with vm_state active and task_state None.
Nov 23 21:11:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:18.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:18 compute-1 nova_compute[230183]: 2025-11-23 21:11:18.912 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:11:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:19.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:11:20 compute-1 ceph-mon[80135]: pgmap v897: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 193 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Nov 23 21:11:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:20.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:11:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:21.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:11:21 compute-1 NetworkManager[49021]: <info>  [1763932281.3712] manager: (patch-br-int-to-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Nov 23 21:11:21 compute-1 NetworkManager[49021]: <info>  [1763932281.3720] manager: (patch-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Nov 23 21:11:21 compute-1 ovn_controller[132845]: 2025-11-23T21:11:21Z|00079|binding|INFO|Releasing lport 882afaa1-9000-493d-808e-b1d906b6e642 from this chassis (sb_readonly=0)
Nov 23 21:11:21 compute-1 nova_compute[230183]: 2025-11-23 21:11:21.372 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:21 compute-1 nova_compute[230183]: 2025-11-23 21:11:21.421 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:21 compute-1 ovn_controller[132845]: 2025-11-23T21:11:21Z|00080|binding|INFO|Releasing lport 882afaa1-9000-493d-808e-b1d906b6e642 from this chassis (sb_readonly=0)
Nov 23 21:11:21 compute-1 nova_compute[230183]: 2025-11-23 21:11:21.425 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:21 compute-1 podman[237720]: 2025-11-23 21:11:21.731929929 +0000 UTC m=+0.136230996 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 23 21:11:21 compute-1 podman[237719]: 2025-11-23 21:11:21.747140304 +0000 UTC m=+0.164613162 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 23 21:11:21 compute-1 nova_compute[230183]: 2025-11-23 21:11:21.819 230187 DEBUG nova.compute.manager [req-9a81574b-2545-4e44-892e-14857fba333e req-f760c495-ed77-4adc-a48a-bfb1781d5806 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received event network-changed-bdbb1df8-a028-4685-9661-24563619eb80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:11:21 compute-1 nova_compute[230183]: 2025-11-23 21:11:21.819 230187 DEBUG nova.compute.manager [req-9a81574b-2545-4e44-892e-14857fba333e req-f760c495-ed77-4adc-a48a-bfb1781d5806 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Refreshing instance network info cache due to event network-changed-bdbb1df8-a028-4685-9661-24563619eb80. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 23 21:11:21 compute-1 nova_compute[230183]: 2025-11-23 21:11:21.820 230187 DEBUG oslo_concurrency.lockutils [req-9a81574b-2545-4e44-892e-14857fba333e req-f760c495-ed77-4adc-a48a-bfb1781d5806 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:11:21 compute-1 nova_compute[230183]: 2025-11-23 21:11:21.820 230187 DEBUG oslo_concurrency.lockutils [req-9a81574b-2545-4e44-892e-14857fba333e req-f760c495-ed77-4adc-a48a-bfb1781d5806 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:11:21 compute-1 nova_compute[230183]: 2025-11-23 21:11:21.820 230187 DEBUG nova.network.neutron [req-9a81574b-2545-4e44-892e-14857fba333e req-f760c495-ed77-4adc-a48a-bfb1781d5806 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Refreshing network info cache for port bdbb1df8-a028-4685-9661-24563619eb80 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 23 21:11:22 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:11:22 compute-1 nova_compute[230183]: 2025-11-23 21:11:22.264 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:22.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:22 compute-1 ceph-mon[80135]: pgmap v898: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 23 21:11:23 compute-1 nova_compute[230183]: 2025-11-23 21:11:23.055 230187 DEBUG nova.network.neutron [req-9a81574b-2545-4e44-892e-14857fba333e req-f760c495-ed77-4adc-a48a-bfb1781d5806 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updated VIF entry in instance network info cache for port bdbb1df8-a028-4685-9661-24563619eb80. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 23 21:11:23 compute-1 nova_compute[230183]: 2025-11-23 21:11:23.057 230187 DEBUG nova.network.neutron [req-9a81574b-2545-4e44-892e-14857fba333e req-f760c495-ed77-4adc-a48a-bfb1781d5806 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updating instance_info_cache with network_info: [{"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:11:23 compute-1 nova_compute[230183]: 2025-11-23 21:11:23.073 230187 DEBUG oslo_concurrency.lockutils [req-9a81574b-2545-4e44-892e-14857fba333e req-f760c495-ed77-4adc-a48a-bfb1781d5806 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:11:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:23.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:23 compute-1 nova_compute[230183]: 2025-11-23 21:11:23.914 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:24.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:24 compute-1 ceph-mon[80135]: pgmap v899: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 100 op/s
Nov 23 21:11:25 compute-1 sudo[237763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:11:25 compute-1 sudo[237763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:11:25 compute-1 sudo[237763]: pam_unix(sudo:session): session closed for user root
Nov 23 21:11:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:25.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:26.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:26 compute-1 ceph-mon[80135]: pgmap v900: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 100 op/s
Nov 23 21:11:26 compute-1 podman[237789]: 2025-11-23 21:11:26.658953335 +0000 UTC m=+0.065571380 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 21:11:27 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:11:27 compute-1 nova_compute[230183]: 2025-11-23 21:11:27.265 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:27.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:28.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:28 compute-1 ceph-mon[80135]: pgmap v901: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 23 21:11:28 compute-1 nova_compute[230183]: 2025-11-23 21:11:28.916 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:29.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:30 compute-1 ovn_controller[132845]: 2025-11-23T21:11:30Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f3:c9:f4 10.100.0.12
Nov 23 21:11:30 compute-1 ovn_controller[132845]: 2025-11-23T21:11:30Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f3:c9:f4 10.100.0.12
Nov 23 21:11:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:11:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:30.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:11:30 compute-1 ceph-mon[80135]: pgmap v902: 337 pgs: 337 active+clean; 88 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 106 KiB/s wr, 93 op/s
Nov 23 21:11:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:31.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:32 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:11:32 compute-1 nova_compute[230183]: 2025-11-23 21:11:32.267 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:32.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:32 compute-1 ceph-mon[80135]: pgmap v903: 337 pgs: 337 active+clean; 120 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 118 op/s
Nov 23 21:11:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:33.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:11:33 compute-1 nova_compute[230183]: 2025-11-23 21:11:33.951 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:11:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:34.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:11:34 compute-1 ceph-mon[80135]: pgmap v904: 337 pgs: 337 active+clean; 120 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Nov 23 21:11:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:35.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:36 compute-1 nova_compute[230183]: 2025-11-23 21:11:36.237 230187 INFO nova.compute.manager [None req-0ae311a4-a435-4e3e-a940-7dd84b2a1769 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Get console output
Nov 23 21:11:36 compute-1 nova_compute[230183]: 2025-11-23 21:11:36.241 234120 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 23 21:11:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:36.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:36 compute-1 ceph-mon[80135]: pgmap v905: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 23 21:11:36 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:36.735 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 21:11:36 compute-1 nova_compute[230183]: 2025-11-23 21:11:36.736 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:36 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:36.737 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 23 21:11:37 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:11:37 compute-1 nova_compute[230183]: 2025-11-23 21:11:37.267 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:37.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:11:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:38.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:11:38 compute-1 ceph-mon[80135]: pgmap v906: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 23 21:11:38 compute-1 nova_compute[230183]: 2025-11-23 21:11:38.702 230187 DEBUG oslo_concurrency.lockutils [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "interface-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:11:38 compute-1 nova_compute[230183]: 2025-11-23 21:11:38.703 230187 DEBUG oslo_concurrency.lockutils [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "interface-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:11:38 compute-1 nova_compute[230183]: 2025-11-23 21:11:38.704 230187 DEBUG nova.objects.instance [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'flavor' on Instance uuid 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:11:38 compute-1 nova_compute[230183]: 2025-11-23 21:11:38.953 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:39 compute-1 nova_compute[230183]: 2025-11-23 21:11:39.049 230187 DEBUG nova.objects.instance [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'pci_requests' on Instance uuid 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:11:39 compute-1 nova_compute[230183]: 2025-11-23 21:11:39.058 230187 DEBUG nova.network.neutron [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 23 21:11:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:39.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:39 compute-1 nova_compute[230183]: 2025-11-23 21:11:39.410 230187 DEBUG nova.policy [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9fb5352c62684f2ba3a326a953a10dfe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '782593db60784ab8bff41fe87d72ff5f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 23 21:11:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:11:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:40.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:11:40 compute-1 ceph-mon[80135]: pgmap v907: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 23 21:11:41 compute-1 nova_compute[230183]: 2025-11-23 21:11:41.257 230187 DEBUG nova.network.neutron [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Successfully created port: 9852de9e-899c-4a7c-8268-07fee5003eac _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 23 21:11:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:41.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:42 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:11:42 compute-1 nova_compute[230183]: 2025-11-23 21:11:42.269 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:42.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:42 compute-1 nova_compute[230183]: 2025-11-23 21:11:42.488 230187 DEBUG nova.network.neutron [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Successfully updated port: 9852de9e-899c-4a7c-8268-07fee5003eac _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 23 21:11:42 compute-1 nova_compute[230183]: 2025-11-23 21:11:42.500 230187 DEBUG oslo_concurrency.lockutils [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:11:42 compute-1 nova_compute[230183]: 2025-11-23 21:11:42.500 230187 DEBUG oslo_concurrency.lockutils [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquired lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:11:42 compute-1 nova_compute[230183]: 2025-11-23 21:11:42.501 230187 DEBUG nova.network.neutron [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 23 21:11:42 compute-1 nova_compute[230183]: 2025-11-23 21:11:42.625 230187 DEBUG nova.compute.manager [req-a9897480-a1b3-44d9-9dd5-baa198defd0b req-4d685653-4467-4056-9a73-768d72a6809e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received event network-changed-9852de9e-899c-4a7c-8268-07fee5003eac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:11:42 compute-1 nova_compute[230183]: 2025-11-23 21:11:42.626 230187 DEBUG nova.compute.manager [req-a9897480-a1b3-44d9-9dd5-baa198defd0b req-4d685653-4467-4056-9a73-768d72a6809e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Refreshing instance network info cache due to event network-changed-9852de9e-899c-4a7c-8268-07fee5003eac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 23 21:11:42 compute-1 nova_compute[230183]: 2025-11-23 21:11:42.626 230187 DEBUG oslo_concurrency.lockutils [req-a9897480-a1b3-44d9-9dd5-baa198defd0b req-4d685653-4467-4056-9a73-768d72a6809e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:11:42 compute-1 ceph-mon[80135]: pgmap v908: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 233 KiB/s rd, 2.1 MiB/s wr, 45 op/s
Nov 23 21:11:42 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:42.739 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d8ff4ac4-2bee-48db-b79e-2466bc4db046, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:11:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:43.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:43 compute-1 ceph-mon[80135]: pgmap v909: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 81 KiB/s wr, 11 op/s
Nov 23 21:11:43 compute-1 nova_compute[230183]: 2025-11-23 21:11:43.995 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:44 compute-1 sudo[237820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 21:11:44 compute-1 sudo[237820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:11:44 compute-1 sudo[237820]: pam_unix(sudo:session): session closed for user root
Nov 23 21:11:44 compute-1 sudo[237845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 21:11:44 compute-1 sudo[237845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.413 230187 DEBUG nova.network.neutron [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updating instance_info_cache with network_info: [{"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9852de9e-899c-4a7c-8268-07fee5003eac", "address": "fa:16:3e:1a:9a:cf", "network": {"id": "a53cafa8-a74e-467c-9117-a31bd6c650ae", "bridge": "br-int", "label": "tempest-network-smoke--511994107", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9852de9e-89", "ovs_interfaceid": "9852de9e-899c-4a7c-8268-07fee5003eac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.433 230187 DEBUG oslo_concurrency.lockutils [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Releasing lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.434 230187 DEBUG oslo_concurrency.lockutils [req-a9897480-a1b3-44d9-9dd5-baa198defd0b req-4d685653-4467-4056-9a73-768d72a6809e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.434 230187 DEBUG nova.network.neutron [req-a9897480-a1b3-44d9-9dd5-baa198defd0b req-4d685653-4467-4056-9a73-768d72a6809e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Refreshing network info cache for port 9852de9e-899c-4a7c-8268-07fee5003eac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.437 230187 DEBUG nova.virt.libvirt.vif [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:11:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1210792474',display_name='tempest-TestNetworkBasicOps-server-1210792474',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1210792474',id=6,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC9I5o3FOJoMlLS5RVHvg4JB6VMA0TLpRAHrRWOuj73hgQ5knZWkP8wznWff+IF5v3eA9GQgz9kKnWlcz54pfIskwjEMQ8tpar2NP2dJjbFuASygJ+AuXJaTUib24SH0fw==',key_name='tempest-TestNetworkBasicOps-192906804',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:11:16Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-jk4nm00m',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:11:16Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=4bac23b8-7bcd-4f5e-89a8-b035a16ffe36,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9852de9e-899c-4a7c-8268-07fee5003eac", "address": "fa:16:3e:1a:9a:cf", "network": {"id": "a53cafa8-a74e-467c-9117-a31bd6c650ae", "bridge": "br-int", "label": "tempest-network-smoke--511994107", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9852de9e-89", "ovs_interfaceid": "9852de9e-899c-4a7c-8268-07fee5003eac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.437 230187 DEBUG nova.network.os_vif_util [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "9852de9e-899c-4a7c-8268-07fee5003eac", "address": "fa:16:3e:1a:9a:cf", "network": {"id": "a53cafa8-a74e-467c-9117-a31bd6c650ae", "bridge": "br-int", "label": "tempest-network-smoke--511994107", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9852de9e-89", "ovs_interfaceid": "9852de9e-899c-4a7c-8268-07fee5003eac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.438 230187 DEBUG nova.network.os_vif_util [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:9a:cf,bridge_name='br-int',has_traffic_filtering=True,id=9852de9e-899c-4a7c-8268-07fee5003eac,network=Network(a53cafa8-a74e-467c-9117-a31bd6c650ae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9852de9e-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.438 230187 DEBUG os_vif [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:9a:cf,bridge_name='br-int',has_traffic_filtering=True,id=9852de9e-899c-4a7c-8268-07fee5003eac,network=Network(a53cafa8-a74e-467c-9117-a31bd6c650ae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9852de9e-89') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.438 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.439 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.439 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.441 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.441 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9852de9e-89, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.441 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9852de9e-89, col_values=(('external_ids', {'iface-id': '9852de9e-899c-4a7c-8268-07fee5003eac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1a:9a:cf', 'vm-uuid': '4bac23b8-7bcd-4f5e-89a8-b035a16ffe36'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.443 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:44 compute-1 NetworkManager[49021]: <info>  [1763932304.4442] manager: (tap9852de9e-89): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.449 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.450 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.451 230187 INFO os_vif [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:9a:cf,bridge_name='br-int',has_traffic_filtering=True,id=9852de9e-899c-4a7c-8268-07fee5003eac,network=Network(a53cafa8-a74e-467c-9117-a31bd6c650ae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9852de9e-89')
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.451 230187 DEBUG nova.virt.libvirt.vif [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:11:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1210792474',display_name='tempest-TestNetworkBasicOps-server-1210792474',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1210792474',id=6,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC9I5o3FOJoMlLS5RVHvg4JB6VMA0TLpRAHrRWOuj73hgQ5knZWkP8wznWff+IF5v3eA9GQgz9kKnWlcz54pfIskwjEMQ8tpar2NP2dJjbFuASygJ+AuXJaTUib24SH0fw==',key_name='tempest-TestNetworkBasicOps-192906804',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:11:16Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-jk4nm00m',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:11:16Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=4bac23b8-7bcd-4f5e-89a8-b035a16ffe36,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9852de9e-899c-4a7c-8268-07fee5003eac", "address": "fa:16:3e:1a:9a:cf", "network": {"id": "a53cafa8-a74e-467c-9117-a31bd6c650ae", "bridge": "br-int", "label": "tempest-network-smoke--511994107", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9852de9e-89", "ovs_interfaceid": "9852de9e-899c-4a7c-8268-07fee5003eac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.452 230187 DEBUG nova.network.os_vif_util [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "9852de9e-899c-4a7c-8268-07fee5003eac", "address": "fa:16:3e:1a:9a:cf", "network": {"id": "a53cafa8-a74e-467c-9117-a31bd6c650ae", "bridge": "br-int", "label": "tempest-network-smoke--511994107", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9852de9e-89", "ovs_interfaceid": "9852de9e-899c-4a7c-8268-07fee5003eac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.452 230187 DEBUG nova.network.os_vif_util [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:9a:cf,bridge_name='br-int',has_traffic_filtering=True,id=9852de9e-899c-4a7c-8268-07fee5003eac,network=Network(a53cafa8-a74e-467c-9117-a31bd6c650ae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9852de9e-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 23 21:11:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.455 230187 DEBUG nova.virt.libvirt.guest [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] attach device xml: <interface type="ethernet">
Nov 23 21:11:44 compute-1 nova_compute[230183]:   <mac address="fa:16:3e:1a:9a:cf"/>
Nov 23 21:11:44 compute-1 nova_compute[230183]:   <model type="virtio"/>
Nov 23 21:11:44 compute-1 nova_compute[230183]:   <driver name="vhost" rx_queue_size="512"/>
Nov 23 21:11:44 compute-1 nova_compute[230183]:   <mtu size="1442"/>
Nov 23 21:11:44 compute-1 nova_compute[230183]:   <target dev="tap9852de9e-89"/>
Nov 23 21:11:44 compute-1 nova_compute[230183]: </interface>
Nov 23 21:11:44 compute-1 nova_compute[230183]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 23 21:11:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:11:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:44.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:11:44 compute-1 kernel: tap9852de9e-89: entered promiscuous mode
Nov 23 21:11:44 compute-1 NetworkManager[49021]: <info>  [1763932304.4664] manager: (tap9852de9e-89): new Tun device (/org/freedesktop/NetworkManager/Devices/55)
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.467 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:44 compute-1 ovn_controller[132845]: 2025-11-23T21:11:44Z|00081|binding|INFO|Claiming lport 9852de9e-899c-4a7c-8268-07fee5003eac for this chassis.
Nov 23 21:11:44 compute-1 ovn_controller[132845]: 2025-11-23T21:11:44Z|00082|binding|INFO|9852de9e-899c-4a7c-8268-07fee5003eac: Claiming fa:16:3e:1a:9a:cf 10.100.0.23
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.475 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:9a:cf 10.100.0.23'], port_security=['fa:16:3e:1a:9a:cf 10.100.0.23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.23/28', 'neutron:device_id': '4bac23b8-7bcd-4f5e-89a8-b035a16ffe36', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a53cafa8-a74e-467c-9117-a31bd6c650ae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cfd1f7f1-25d4-42fe-ac59-ece898bff9bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c22c132b-3565-4344-9558-f1d93c19cb57, chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=9852de9e-899c-4a7c-8268-07fee5003eac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.477 142158 INFO neutron.agent.ovn.metadata.agent [-] Port 9852de9e-899c-4a7c-8268-07fee5003eac in datapath a53cafa8-a74e-467c-9117-a31bd6c650ae bound to our chassis
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.478 142158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a53cafa8-a74e-467c-9117-a31bd6c650ae
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.498 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[d4f88ddc-a394-4add-91ed-400218c68b6d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.499 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa53cafa8-a1 in ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.501 233901 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa53cafa8-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.502 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[8c407176-758c-45a1-9f71-57e81287c5fb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.502 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[2e0a8e47-7c39-463f-988d-26e4ad6dcff5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:11:44 compute-1 systemd-udevd[237877]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.510 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.515 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[383c9d50-c97e-475f-a1e5-c35c09d77d8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:11:44 compute-1 ovn_controller[132845]: 2025-11-23T21:11:44Z|00083|binding|INFO|Setting lport 9852de9e-899c-4a7c-8268-07fee5003eac ovn-installed in OVS
Nov 23 21:11:44 compute-1 ovn_controller[132845]: 2025-11-23T21:11:44Z|00084|binding|INFO|Setting lport 9852de9e-899c-4a7c-8268-07fee5003eac up in Southbound
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.521 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:44 compute-1 NetworkManager[49021]: <info>  [1763932304.5261] device (tap9852de9e-89): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 23 21:11:44 compute-1 NetworkManager[49021]: <info>  [1763932304.5275] device (tap9852de9e-89): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.541 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[cad880d7-0982-4bd9-b126-a7a5e274b07c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.560 230187 DEBUG nova.virt.libvirt.driver [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.562 230187 DEBUG nova.virt.libvirt.driver [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.562 230187 DEBUG nova.virt.libvirt.driver [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No VIF found with MAC fa:16:3e:f3:c9:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.562 230187 DEBUG nova.virt.libvirt.driver [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No VIF found with MAC fa:16:3e:1a:9a:cf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.574 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[9cc34fb8-2db9-4b3c-8ba2-6a29ce89e412]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.580 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[7faea997-9739-408d-9add-c3e9d9b37b88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:11:44 compute-1 NetworkManager[49021]: <info>  [1763932304.5823] manager: (tapa53cafa8-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/56)
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.591 230187 DEBUG nova.virt.libvirt.guest [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 21:11:44 compute-1 nova_compute[230183]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 21:11:44 compute-1 nova_compute[230183]:   <nova:name>tempest-TestNetworkBasicOps-server-1210792474</nova:name>
Nov 23 21:11:44 compute-1 nova_compute[230183]:   <nova:creationTime>2025-11-23 21:11:44</nova:creationTime>
Nov 23 21:11:44 compute-1 nova_compute[230183]:   <nova:flavor name="m1.nano">
Nov 23 21:11:44 compute-1 nova_compute[230183]:     <nova:memory>128</nova:memory>
Nov 23 21:11:44 compute-1 nova_compute[230183]:     <nova:disk>1</nova:disk>
Nov 23 21:11:44 compute-1 nova_compute[230183]:     <nova:swap>0</nova:swap>
Nov 23 21:11:44 compute-1 nova_compute[230183]:     <nova:ephemeral>0</nova:ephemeral>
Nov 23 21:11:44 compute-1 nova_compute[230183]:     <nova:vcpus>1</nova:vcpus>
Nov 23 21:11:44 compute-1 nova_compute[230183]:   </nova:flavor>
Nov 23 21:11:44 compute-1 nova_compute[230183]:   <nova:owner>
Nov 23 21:11:44 compute-1 nova_compute[230183]:     <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 21:11:44 compute-1 nova_compute[230183]:     <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 21:11:44 compute-1 nova_compute[230183]:   </nova:owner>
Nov 23 21:11:44 compute-1 nova_compute[230183]:   <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 21:11:44 compute-1 nova_compute[230183]:   <nova:ports>
Nov 23 21:11:44 compute-1 nova_compute[230183]:     <nova:port uuid="bdbb1df8-a028-4685-9661-24563619eb80">
Nov 23 21:11:44 compute-1 nova_compute[230183]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 23 21:11:44 compute-1 nova_compute[230183]:     </nova:port>
Nov 23 21:11:44 compute-1 nova_compute[230183]:     <nova:port uuid="9852de9e-899c-4a7c-8268-07fee5003eac">
Nov 23 21:11:44 compute-1 nova_compute[230183]:       <nova:ip type="fixed" address="10.100.0.23" ipVersion="4"/>
Nov 23 21:11:44 compute-1 nova_compute[230183]:     </nova:port>
Nov 23 21:11:44 compute-1 nova_compute[230183]:   </nova:ports>
Nov 23 21:11:44 compute-1 nova_compute[230183]: </nova:instance>
Nov 23 21:11:44 compute-1 nova_compute[230183]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.610 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[7bcfa517-d2b0-4e41-b3ea-7ad9f6c82846]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.614 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[d937d89c-b04b-4267-afad-01911657498c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.624 230187 DEBUG oslo_concurrency.lockutils [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "interface-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:11:44 compute-1 NetworkManager[49021]: <info>  [1763932304.6342] device (tapa53cafa8-a0): carrier: link connected
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.641 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[7eb8c8e0-a573-4de4-9cd2-e501ae4646c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.657 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[eba859da-d21c-42aa-bff4-5d75306793e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa53cafa8-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:b5:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 425320, 'reachable_time': 27361, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237919, 'error': None, 'target': 'ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.670 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[7825ae70-91fe-499d-bf5a-f347cf28aad7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8f:b52b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 425320, 'tstamp': 425320}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 237921, 'error': None, 'target': 'ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.687 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[6a0dbd36-150f-4fba-a273-93ca6cebe021]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa53cafa8-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:b5:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 425320, 'reachable_time': 27361, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 237922, 'error': None, 'target': 'ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.718 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[ff984d83-b34a-4d7a-83a5-4855eab7821d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.773 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[bbdd7cb4-ebe0-4b66-84e5-3f5cfff3d0de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.774 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa53cafa8-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.774 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.775 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa53cafa8-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.776 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:44 compute-1 NetworkManager[49021]: <info>  [1763932304.7773] manager: (tapa53cafa8-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Nov 23 21:11:44 compute-1 kernel: tapa53cafa8-a0: entered promiscuous mode
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.779 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.780 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa53cafa8-a0, col_values=(('external_ids', {'iface-id': 'cab0b4e0-79b2-41b3-92b4-7053f2aab9f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.781 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:44 compute-1 ovn_controller[132845]: 2025-11-23T21:11:44Z|00085|binding|INFO|Releasing lport cab0b4e0-79b2-41b3-92b4-7053f2aab9f8 from this chassis (sb_readonly=0)
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.784 142158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a53cafa8-a74e-467c-9117-a31bd6c650ae.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a53cafa8-a74e-467c-9117-a31bd6c650ae.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.785 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[3a5366df-1dc4-4aa5-a718-d2241a83d58a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.786 142158 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]: global
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]:     log         /dev/log local0 debug
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]:     log-tag     haproxy-metadata-proxy-a53cafa8-a74e-467c-9117-a31bd6c650ae
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]:     user        root
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]:     group       root
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]:     maxconn     1024
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]:     pidfile     /var/lib/neutron/external/pids/a53cafa8-a74e-467c-9117-a31bd6c650ae.pid.haproxy
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]:     daemon
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]: 
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]: defaults
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]:     log global
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]:     mode http
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]:     option httplog
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]:     option dontlognull
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]:     option http-server-close
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]:     option forwardfor
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]:     retries                 3
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]:     timeout http-request    30s
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]:     timeout connect         30s
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]:     timeout client          32s
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]:     timeout server          32s
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]:     timeout http-keep-alive 30s
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]: 
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]: 
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]: listen listener
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]:     bind 169.254.169.254:80
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]:     server metadata /var/lib/neutron/metadata_proxy
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]:     http-request add-header X-OVN-Network-ID a53cafa8-a74e-467c-9117-a31bd6c650ae
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 23 21:11:44 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.787 142158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae', 'env', 'PROCESS_TAG=haproxy-a53cafa8-a74e-467c-9117-a31bd6c650ae', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a53cafa8-a74e-467c-9117-a31bd6c650ae.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.789 230187 DEBUG nova.compute.manager [req-9242f190-b0b1-4de5-a6f1-d86aac6e3676 req-2c751f8c-bbe0-4baf-abba-e99c91ebe50b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received event network-vif-plugged-9852de9e-899c-4a7c-8268-07fee5003eac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.790 230187 DEBUG oslo_concurrency.lockutils [req-9242f190-b0b1-4de5-a6f1-d86aac6e3676 req-2c751f8c-bbe0-4baf-abba-e99c91ebe50b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.790 230187 DEBUG oslo_concurrency.lockutils [req-9242f190-b0b1-4de5-a6f1-d86aac6e3676 req-2c751f8c-bbe0-4baf-abba-e99c91ebe50b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.790 230187 DEBUG oslo_concurrency.lockutils [req-9242f190-b0b1-4de5-a6f1-d86aac6e3676 req-2c751f8c-bbe0-4baf-abba-e99c91ebe50b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.790 230187 DEBUG nova.compute.manager [req-9242f190-b0b1-4de5-a6f1-d86aac6e3676 req-2c751f8c-bbe0-4baf-abba-e99c91ebe50b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] No waiting events found dispatching network-vif-plugged-9852de9e-899c-4a7c-8268-07fee5003eac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.791 230187 WARNING nova.compute.manager [req-9242f190-b0b1-4de5-a6f1-d86aac6e3676 req-2c751f8c-bbe0-4baf-abba-e99c91ebe50b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received unexpected event network-vif-plugged-9852de9e-899c-4a7c-8268-07fee5003eac for instance with vm_state active and task_state None.
Nov 23 21:11:44 compute-1 nova_compute[230183]: 2025-11-23 21:11:44.793 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:44 compute-1 sudo[237845]: pam_unix(sudo:session): session closed for user root
Nov 23 21:11:45 compute-1 sudo[237949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 21:11:45 compute-1 sudo[237949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:11:45 compute-1 sudo[237949]: pam_unix(sudo:session): session closed for user root
Nov 23 21:11:45 compute-1 sudo[237975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 03808be8-ae4a-5548-82e6-4a294f1bc627 -- inventory --format=json-pretty --filter-for-batch
Nov 23 21:11:45 compute-1 sudo[237975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:11:45 compute-1 podman[238021]: 2025-11-23 21:11:45.148410135 +0000 UTC m=+0.047844037 container create 9dd79f3e519c65bbc0e5dce6d36ef4f64107f7bc93e1476f17856b8c412695ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS)
Nov 23 21:11:45 compute-1 systemd[1]: Started libpod-conmon-9dd79f3e519c65bbc0e5dce6d36ef4f64107f7bc93e1476f17856b8c412695ed.scope.
Nov 23 21:11:45 compute-1 sudo[238032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:11:45 compute-1 sudo[238032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:11:45 compute-1 sudo[238032]: pam_unix(sudo:session): session closed for user root
Nov 23 21:11:45 compute-1 systemd[1]: Started libcrun container.
Nov 23 21:11:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0150f09f49afcae45b35871ed00a9581191e83bc7cd591edc409336857fd6c40/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 21:11:45 compute-1 podman[238021]: 2025-11-23 21:11:45.122322058 +0000 UTC m=+0.021755990 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 23 21:11:45 compute-1 podman[238021]: 2025-11-23 21:11:45.233045453 +0000 UTC m=+0.132479385 container init 9dd79f3e519c65bbc0e5dce6d36ef4f64107f7bc93e1476f17856b8c412695ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 23 21:11:45 compute-1 podman[238021]: 2025-11-23 21:11:45.238712374 +0000 UTC m=+0.138146276 container start 9dd79f3e519c65bbc0e5dce6d36ef4f64107f7bc93e1476f17856b8c412695ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 21:11:45 compute-1 neutron-haproxy-ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae[238060]: [NOTICE]   (238065) : New worker (238068) forked
Nov 23 21:11:45 compute-1 neutron-haproxy-ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae[238060]: [NOTICE]   (238065) : Loading success.
Nov 23 21:11:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:45.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:45 compute-1 podman[238114]: 2025-11-23 21:11:45.456246038 +0000 UTC m=+0.040560943 container create cd5f313a7f111491576389298826be6755e9af7846e774493a248c5c6c23d943 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 23 21:11:45 compute-1 systemd[1]: Started libpod-conmon-cd5f313a7f111491576389298826be6755e9af7846e774493a248c5c6c23d943.scope.
Nov 23 21:11:45 compute-1 systemd[1]: Started libcrun container.
Nov 23 21:11:45 compute-1 podman[238114]: 2025-11-23 21:11:45.438742271 +0000 UTC m=+0.023057196 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 21:11:45 compute-1 podman[238114]: 2025-11-23 21:11:45.536439698 +0000 UTC m=+0.120754623 container init cd5f313a7f111491576389298826be6755e9af7846e774493a248c5c6c23d943 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_zhukovsky, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 21:11:45 compute-1 podman[238114]: 2025-11-23 21:11:45.542735505 +0000 UTC m=+0.127050400 container start cd5f313a7f111491576389298826be6755e9af7846e774493a248c5c6c23d943 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_zhukovsky, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 21:11:45 compute-1 podman[238114]: 2025-11-23 21:11:45.547980365 +0000 UTC m=+0.132295280 container attach cd5f313a7f111491576389298826be6755e9af7846e774493a248c5c6c23d943 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_zhukovsky, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 21:11:45 compute-1 systemd[1]: libpod-cd5f313a7f111491576389298826be6755e9af7846e774493a248c5c6c23d943.scope: Deactivated successfully.
Nov 23 21:11:45 compute-1 relaxed_zhukovsky[238130]: 167 167
Nov 23 21:11:45 compute-1 podman[238114]: 2025-11-23 21:11:45.552011133 +0000 UTC m=+0.136326038 container died cd5f313a7f111491576389298826be6755e9af7846e774493a248c5c6c23d943 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_zhukovsky, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 21:11:45 compute-1 conmon[238130]: conmon cd5f313a7f1114915763 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cd5f313a7f111491576389298826be6755e9af7846e774493a248c5c6c23d943.scope/container/memory.events
Nov 23 21:11:45 compute-1 systemd[1]: var-lib-containers-storage-overlay-b28301d7c9dfa616731ec392365a7b57a1bf37d9fb07d297666f65ae4220d814-merged.mount: Deactivated successfully.
Nov 23 21:11:45 compute-1 podman[238114]: 2025-11-23 21:11:45.588723672 +0000 UTC m=+0.173038577 container remove cd5f313a7f111491576389298826be6755e9af7846e774493a248c5c6c23d943 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 21:11:45 compute-1 systemd[1]: libpod-conmon-cd5f313a7f111491576389298826be6755e9af7846e774493a248c5c6c23d943.scope: Deactivated successfully.
Nov 23 21:11:45 compute-1 ovn_controller[132845]: 2025-11-23T21:11:45Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1a:9a:cf 10.100.0.23
Nov 23 21:11:45 compute-1 ovn_controller[132845]: 2025-11-23T21:11:45Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1a:9a:cf 10.100.0.23
Nov 23 21:11:45 compute-1 podman[238155]: 2025-11-23 21:11:45.760607988 +0000 UTC m=+0.037247195 container create 49bcd82e77ba084fa23f2bb0d1b8100a6d664ccdfb6645b1441b2b64a16266d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 23 21:11:45 compute-1 nova_compute[230183]: 2025-11-23 21:11:45.777 230187 DEBUG nova.network.neutron [req-a9897480-a1b3-44d9-9dd5-baa198defd0b req-4d685653-4467-4056-9a73-768d72a6809e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updated VIF entry in instance network info cache for port 9852de9e-899c-4a7c-8268-07fee5003eac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 23 21:11:45 compute-1 nova_compute[230183]: 2025-11-23 21:11:45.777 230187 DEBUG nova.network.neutron [req-a9897480-a1b3-44d9-9dd5-baa198defd0b req-4d685653-4467-4056-9a73-768d72a6809e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updating instance_info_cache with network_info: [{"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9852de9e-899c-4a7c-8268-07fee5003eac", "address": "fa:16:3e:1a:9a:cf", "network": {"id": "a53cafa8-a74e-467c-9117-a31bd6c650ae", "bridge": "br-int", "label": "tempest-network-smoke--511994107", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9852de9e-89", "ovs_interfaceid": "9852de9e-899c-4a7c-8268-07fee5003eac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:11:45 compute-1 nova_compute[230183]: 2025-11-23 21:11:45.789 230187 DEBUG oslo_concurrency.lockutils [req-a9897480-a1b3-44d9-9dd5-baa198defd0b req-4d685653-4467-4056-9a73-768d72a6809e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:11:45 compute-1 systemd[1]: Started libpod-conmon-49bcd82e77ba084fa23f2bb0d1b8100a6d664ccdfb6645b1441b2b64a16266d8.scope.
Nov 23 21:11:45 compute-1 systemd[1]: Started libcrun container.
Nov 23 21:11:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38862a5598e4fd0a274f5b5bec77424f08a2cea1ff5d04fa6f150097b5ed51d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 21:11:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38862a5598e4fd0a274f5b5bec77424f08a2cea1ff5d04fa6f150097b5ed51d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 21:11:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38862a5598e4fd0a274f5b5bec77424f08a2cea1ff5d04fa6f150097b5ed51d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 21:11:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38862a5598e4fd0a274f5b5bec77424f08a2cea1ff5d04fa6f150097b5ed51d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 21:11:45 compute-1 podman[238155]: 2025-11-23 21:11:45.822824158 +0000 UTC m=+0.099463375 container init 49bcd82e77ba084fa23f2bb0d1b8100a6d664ccdfb6645b1441b2b64a16266d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_diffie, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 21:11:45 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:11:45 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:11:45 compute-1 ceph-mon[80135]: pgmap v910: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 83 KiB/s wr, 12 op/s
Nov 23 21:11:45 compute-1 podman[238155]: 2025-11-23 21:11:45.83188261 +0000 UTC m=+0.108521817 container start 49bcd82e77ba084fa23f2bb0d1b8100a6d664ccdfb6645b1441b2b64a16266d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_diffie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 21:11:45 compute-1 podman[238155]: 2025-11-23 21:11:45.744623852 +0000 UTC m=+0.021263089 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 21:11:45 compute-1 podman[238155]: 2025-11-23 21:11:45.842535725 +0000 UTC m=+0.119174962 container attach 49bcd82e77ba084fa23f2bb0d1b8100a6d664ccdfb6645b1441b2b64a16266d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_diffie, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 23 21:11:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:46.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:46 compute-1 nifty_diffie[238171]: [
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:     {
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:         "available": false,
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:         "being_replaced": false,
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:         "ceph_device_lvm": false,
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:         "lsm_data": {},
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:         "lvs": [],
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:         "path": "/dev/sr0",
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:         "rejected_reasons": [
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:             "Insufficient space (<5GB)",
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:             "Has a FileSystem"
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:         ],
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:         "sys_api": {
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:             "actuators": null,
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:             "device_nodes": [
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:                 "sr0"
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:             ],
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:             "devname": "sr0",
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:             "human_readable_size": "482.00 KB",
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:             "id_bus": "ata",
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:             "model": "QEMU DVD-ROM",
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:             "nr_requests": "2",
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:             "parent": "/dev/sr0",
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:             "partitions": {},
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:             "path": "/dev/sr0",
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:             "removable": "1",
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:             "rev": "2.5+",
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:             "ro": "0",
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:             "rotational": "1",
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:             "sas_address": "",
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:             "sas_device_handle": "",
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:             "scheduler_mode": "mq-deadline",
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:             "sectors": 0,
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:             "sectorsize": "2048",
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:             "size": 493568.0,
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:             "support_discard": "2048",
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:             "type": "disk",
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:             "vendor": "QEMU"
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:         }
Nov 23 21:11:46 compute-1 nifty_diffie[238171]:     }
Nov 23 21:11:46 compute-1 nifty_diffie[238171]: ]
Nov 23 21:11:46 compute-1 systemd[1]: libpod-49bcd82e77ba084fa23f2bb0d1b8100a6d664ccdfb6645b1441b2b64a16266d8.scope: Deactivated successfully.
Nov 23 21:11:46 compute-1 podman[238155]: 2025-11-23 21:11:46.52224637 +0000 UTC m=+0.798885597 container died 49bcd82e77ba084fa23f2bb0d1b8100a6d664ccdfb6645b1441b2b64a16266d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_diffie, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 23 21:11:46 compute-1 systemd[1]: var-lib-containers-storage-overlay-38862a5598e4fd0a274f5b5bec77424f08a2cea1ff5d04fa6f150097b5ed51d4-merged.mount: Deactivated successfully.
Nov 23 21:11:46 compute-1 podman[238155]: 2025-11-23 21:11:46.565576926 +0000 UTC m=+0.842216133 container remove 49bcd82e77ba084fa23f2bb0d1b8100a6d664ccdfb6645b1441b2b64a16266d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_diffie, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 21:11:46 compute-1 systemd[1]: libpod-conmon-49bcd82e77ba084fa23f2bb0d1b8100a6d664ccdfb6645b1441b2b64a16266d8.scope: Deactivated successfully.
Nov 23 21:11:46 compute-1 sudo[237975]: pam_unix(sudo:session): session closed for user root
Nov 23 21:11:46 compute-1 nova_compute[230183]: 2025-11-23 21:11:46.875 230187 DEBUG nova.compute.manager [req-ac1671b5-1ec7-4774-9b8e-0d920a589520 req-7dc3a174-aa0c-4bc1-b841-c538e80cb0b9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received event network-vif-plugged-9852de9e-899c-4a7c-8268-07fee5003eac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:11:46 compute-1 nova_compute[230183]: 2025-11-23 21:11:46.876 230187 DEBUG oslo_concurrency.lockutils [req-ac1671b5-1ec7-4774-9b8e-0d920a589520 req-7dc3a174-aa0c-4bc1-b841-c538e80cb0b9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:11:46 compute-1 nova_compute[230183]: 2025-11-23 21:11:46.877 230187 DEBUG oslo_concurrency.lockutils [req-ac1671b5-1ec7-4774-9b8e-0d920a589520 req-7dc3a174-aa0c-4bc1-b841-c538e80cb0b9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:11:46 compute-1 nova_compute[230183]: 2025-11-23 21:11:46.877 230187 DEBUG oslo_concurrency.lockutils [req-ac1671b5-1ec7-4774-9b8e-0d920a589520 req-7dc3a174-aa0c-4bc1-b841-c538e80cb0b9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:11:46 compute-1 nova_compute[230183]: 2025-11-23 21:11:46.877 230187 DEBUG nova.compute.manager [req-ac1671b5-1ec7-4774-9b8e-0d920a589520 req-7dc3a174-aa0c-4bc1-b841-c538e80cb0b9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] No waiting events found dispatching network-vif-plugged-9852de9e-899c-4a7c-8268-07fee5003eac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 21:11:46 compute-1 nova_compute[230183]: 2025-11-23 21:11:46.877 230187 WARNING nova.compute.manager [req-ac1671b5-1ec7-4774-9b8e-0d920a589520 req-7dc3a174-aa0c-4bc1-b841-c538e80cb0b9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received unexpected event network-vif-plugged-9852de9e-899c-4a7c-8268-07fee5003eac for instance with vm_state active and task_state None.
Nov 23 21:11:47 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:11:47 compute-1 nova_compute[230183]: 2025-11-23 21:11:47.273 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:11:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:47.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:11:47 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:11:47 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:11:47 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:11:47 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:11:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:48.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:48 compute-1 ceph-mon[80135]: pgmap v911: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 14 KiB/s wr, 1 op/s
Nov 23 21:11:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:11:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:11:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:11:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:11:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 21:11:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:11:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:11:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 21:11:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 21:11:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:11:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:49.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:49 compute-1 nova_compute[230183]: 2025-11-23 21:11:49.444 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:49 compute-1 ceph-mon[80135]: pgmap v912: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 19 KiB/s wr, 2 op/s
Nov 23 21:11:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:11:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:50.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:11:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:51.067 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:11:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:51.068 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:11:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:11:51.068 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:11:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 21:11:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:51.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 21:11:52 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:11:52 compute-1 nova_compute[230183]: 2025-11-23 21:11:52.274 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:52 compute-1 ceph-mon[80135]: pgmap v913: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 8.7 KiB/s wr, 1 op/s
Nov 23 21:11:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:52.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:52 compute-1 podman[239397]: 2025-11-23 21:11:52.661035597 +0000 UTC m=+0.077045727 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 21:11:52 compute-1 podman[239396]: 2025-11-23 21:11:52.661100199 +0000 UTC m=+0.077001936 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true)
Nov 23 21:11:53 compute-1 sudo[239439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 21:11:53 compute-1 sudo[239439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:11:53 compute-1 sudo[239439]: pam_unix(sudo:session): session closed for user root
Nov 23 21:11:53 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/759690542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:11:53 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:11:53 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:11:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:11:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:53.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:11:54 compute-1 ceph-mon[80135]: pgmap v914: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 7.3 KiB/s wr, 1 op/s
Nov 23 21:11:54 compute-1 nova_compute[230183]: 2025-11-23 21:11:54.446 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:54.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:55.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:55 compute-1 nova_compute[230183]: 2025-11-23 21:11:55.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:11:56 compute-1 ceph-mon[80135]: pgmap v915: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 23 21:11:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:56.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:57 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:11:57 compute-1 nova_compute[230183]: 2025-11-23 21:11:57.276 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:11:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:11:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:57.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:11:57 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1825239050' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:11:57 compute-1 podman[239469]: 2025-11-23 21:11:57.649910434 +0000 UTC m=+0.067697767 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 23 21:11:58 compute-1 ceph-mon[80135]: pgmap v916: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 23 21:11:58 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1956243388' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:11:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:58.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:11:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:11:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:59.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:11:59 compute-1 nova_compute[230183]: 2025-11-23 21:11:59.436 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:11:59 compute-1 nova_compute[230183]: 2025-11-23 21:11:59.449 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:12:00 compute-1 ceph-mon[80135]: pgmap v917: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 23 21:12:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:00.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:01.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:01 compute-1 nova_compute[230183]: 2025-11-23 21:12:01.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:12:01 compute-1 nova_compute[230183]: 2025-11-23 21:12:01.428 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 23 21:12:01 compute-1 nova_compute[230183]: 2025-11-23 21:12:01.441 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 23 21:12:01 compute-1 nova_compute[230183]: 2025-11-23 21:12:01.442 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:12:01 compute-1 nova_compute[230183]: 2025-11-23 21:12:01.442 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 23 21:12:02 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:12:02 compute-1 nova_compute[230183]: 2025-11-23 21:12:02.279 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:12:02 compute-1 ceph-mon[80135]: pgmap v918: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Nov 23 21:12:02 compute-1 nova_compute[230183]: 2025-11-23 21:12:02.449 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:12:02 compute-1 nova_compute[230183]: 2025-11-23 21:12:02.471 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:12:02 compute-1 nova_compute[230183]: 2025-11-23 21:12:02.472 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:12:02 compute-1 nova_compute[230183]: 2025-11-23 21:12:02.472 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:12:02 compute-1 nova_compute[230183]: 2025-11-23 21:12:02.472 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 21:12:02 compute-1 nova_compute[230183]: 2025-11-23 21:12:02.472 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:12:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:02.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:02 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:12:02 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1966353478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:12:02 compute-1 nova_compute[230183]: 2025-11-23 21:12:02.924 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:12:02 compute-1 nova_compute[230183]: 2025-11-23 21:12:02.978 230187 DEBUG nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 23 21:12:02 compute-1 nova_compute[230183]: 2025-11-23 21:12:02.978 230187 DEBUG nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 23 21:12:03 compute-1 nova_compute[230183]: 2025-11-23 21:12:03.128 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 21:12:03 compute-1 nova_compute[230183]: 2025-11-23 21:12:03.129 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4728MB free_disk=59.92177200317383GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 21:12:03 compute-1 nova_compute[230183]: 2025-11-23 21:12:03.129 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:12:03 compute-1 nova_compute[230183]: 2025-11-23 21:12:03.130 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:12:03 compute-1 nova_compute[230183]: 2025-11-23 21:12:03.270 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Instance 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 23 21:12:03 compute-1 nova_compute[230183]: 2025-11-23 21:12:03.271 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 21:12:03 compute-1 nova_compute[230183]: 2025-11-23 21:12:03.271 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 21:12:03 compute-1 nova_compute[230183]: 2025-11-23 21:12:03.375 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Refreshing inventories for resource provider bb217351-d4c8-44a4-9137-08393a1f72bc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 23 21:12:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:03.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:03 compute-1 nova_compute[230183]: 2025-11-23 21:12:03.433 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Updating ProviderTree inventory for provider bb217351-d4c8-44a4-9137-08393a1f72bc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 23 21:12:03 compute-1 nova_compute[230183]: 2025-11-23 21:12:03.434 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Updating inventory in ProviderTree for provider bb217351-d4c8-44a4-9137-08393a1f72bc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 23 21:12:03 compute-1 nova_compute[230183]: 2025-11-23 21:12:03.447 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Refreshing aggregate associations for resource provider bb217351-d4c8-44a4-9137-08393a1f72bc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 23 21:12:03 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1966353478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:12:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:12:03 compute-1 nova_compute[230183]: 2025-11-23 21:12:03.470 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Refreshing trait associations for resource provider bb217351-d4c8-44a4-9137-08393a1f72bc, traits: COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,HW_CPU_X86_AVX,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SHA,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SVM,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 23 21:12:03 compute-1 nova_compute[230183]: 2025-11-23 21:12:03.510 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:12:03 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:12:03 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3949694408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:12:03 compute-1 nova_compute[230183]: 2025-11-23 21:12:03.919 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:12:03 compute-1 nova_compute[230183]: 2025-11-23 21:12:03.925 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:12:03 compute-1 nova_compute[230183]: 2025-11-23 21:12:03.938 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:12:03 compute-1 nova_compute[230183]: 2025-11-23 21:12:03.965 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 21:12:03 compute-1 nova_compute[230183]: 2025-11-23 21:12:03.966 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:12:04 compute-1 nova_compute[230183]: 2025-11-23 21:12:04.475 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:12:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:12:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:04.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:12:04 compute-1 ceph-mon[80135]: pgmap v919: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 23 21:12:04 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3949694408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:12:04 compute-1 nova_compute[230183]: 2025-11-23 21:12:04.939 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:12:04 compute-1 nova_compute[230183]: 2025-11-23 21:12:04.940 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:12:04 compute-1 nova_compute[230183]: 2025-11-23 21:12:04.940 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 21:12:04 compute-1 nova_compute[230183]: 2025-11-23 21:12:04.940 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 21:12:05 compute-1 nova_compute[230183]: 2025-11-23 21:12:05.224 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:12:05 compute-1 nova_compute[230183]: 2025-11-23 21:12:05.224 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquired lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:12:05 compute-1 nova_compute[230183]: 2025-11-23 21:12:05.224 230187 DEBUG nova.network.neutron [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 23 21:12:05 compute-1 nova_compute[230183]: 2025-11-23 21:12:05.225 230187 DEBUG nova.objects.instance [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:12:05 compute-1 sudo[239537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:12:05 compute-1 sudo[239537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:12:05 compute-1 sudo[239537]: pam_unix(sudo:session): session closed for user root
Nov 23 21:12:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:12:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:05.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:12:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:06.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:06 compute-1 ceph-mon[80135]: pgmap v920: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 23 21:12:07 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:12:07 compute-1 nova_compute[230183]: 2025-11-23 21:12:07.282 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:12:07 compute-1 nova_compute[230183]: 2025-11-23 21:12:07.380 230187 DEBUG nova.network.neutron [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updating instance_info_cache with network_info: [{"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9852de9e-899c-4a7c-8268-07fee5003eac", "address": "fa:16:3e:1a:9a:cf", "network": {"id": "a53cafa8-a74e-467c-9117-a31bd6c650ae", "bridge": "br-int", "label": "tempest-network-smoke--511994107", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": 
true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9852de9e-89", "ovs_interfaceid": "9852de9e-899c-4a7c-8268-07fee5003eac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:12:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:07.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:07 compute-1 nova_compute[230183]: 2025-11-23 21:12:07.399 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Releasing lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:12:07 compute-1 nova_compute[230183]: 2025-11-23 21:12:07.399 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 23 21:12:07 compute-1 nova_compute[230183]: 2025-11-23 21:12:07.400 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:12:07 compute-1 nova_compute[230183]: 2025-11-23 21:12:07.401 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:12:07 compute-1 nova_compute[230183]: 2025-11-23 21:12:07.401 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:12:07 compute-1 nova_compute[230183]: 2025-11-23 21:12:07.402 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:12:07 compute-1 nova_compute[230183]: 2025-11-23 21:12:07.402 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:12:07 compute-1 nova_compute[230183]: 2025-11-23 21:12:07.402 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 21:12:07 compute-1 nova_compute[230183]: 2025-11-23 21:12:07.885 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:12:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:08.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:08 compute-1 nova_compute[230183]: 2025-11-23 21:12:08.571 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:12:08 compute-1 nova_compute[230183]: 2025-11-23 21:12:08.588 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Triggering sync for uuid 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 23 21:12:08 compute-1 nova_compute[230183]: 2025-11-23 21:12:08.589 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:12:08 compute-1 nova_compute[230183]: 2025-11-23 21:12:08.590 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:12:08 compute-1 nova_compute[230183]: 2025-11-23 21:12:08.624 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:12:08 compute-1 ceph-mon[80135]: pgmap v921: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 75 op/s
Nov 23 21:12:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/4168597009' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 21:12:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/4168597009' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 21:12:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/497318453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:12:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:12:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:09.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:12:09 compute-1 nova_compute[230183]: 2025-11-23 21:12:09.478 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:12:09 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1749248425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:12:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:10.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:10 compute-1 ceph-mon[80135]: pgmap v922: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 75 op/s
Nov 23 21:12:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:11.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:11 compute-1 ceph-mon[80135]: pgmap v923: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 75 op/s
Nov 23 21:12:12 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:12:12 compute-1 nova_compute[230183]: 2025-11-23 21:12:12.286 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:12:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:12:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:12.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:12:12 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2127285944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:12:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:13.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:13 compute-1 ceph-mon[80135]: pgmap v924: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.3 KiB/s wr, 67 op/s
Nov 23 21:12:13 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/855298423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:12:14 compute-1 nova_compute[230183]: 2025-11-23 21:12:14.482 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:12:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:14.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:12:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:15.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:12:16 compute-1 ceph-mon[80135]: pgmap v925: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 131 op/s
Nov 23 21:12:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:16.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:17 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:12:17 compute-1 nova_compute[230183]: 2025-11-23 21:12:17.287 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:12:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:17.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:18 compute-1 ceph-mon[80135]: pgmap v926: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 23 21:12:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:12:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:18.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:19.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:19 compute-1 nova_compute[230183]: 2025-11-23 21:12:19.527 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:12:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:20 compute-1 ceph-mon[80135]: pgmap v927: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 23 21:12:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:20.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:21.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:21 compute-1 nova_compute[230183]: 2025-11-23 21:12:21.712 230187 DEBUG nova.compute.manager [req-4c1f441d-d75a-4227-b72f-8a3849ce6944 req-276eebf3-1b82-4bcd-8e53-9dd2b02a02ef 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received event network-changed-9852de9e-899c-4a7c-8268-07fee5003eac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:12:21 compute-1 nova_compute[230183]: 2025-11-23 21:12:21.713 230187 DEBUG nova.compute.manager [req-4c1f441d-d75a-4227-b72f-8a3849ce6944 req-276eebf3-1b82-4bcd-8e53-9dd2b02a02ef 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Refreshing instance network info cache due to event network-changed-9852de9e-899c-4a7c-8268-07fee5003eac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 23 21:12:21 compute-1 nova_compute[230183]: 2025-11-23 21:12:21.713 230187 DEBUG oslo_concurrency.lockutils [req-4c1f441d-d75a-4227-b72f-8a3849ce6944 req-276eebf3-1b82-4bcd-8e53-9dd2b02a02ef 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:12:21 compute-1 nova_compute[230183]: 2025-11-23 21:12:21.713 230187 DEBUG oslo_concurrency.lockutils [req-4c1f441d-d75a-4227-b72f-8a3849ce6944 req-276eebf3-1b82-4bcd-8e53-9dd2b02a02ef 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:12:21 compute-1 nova_compute[230183]: 2025-11-23 21:12:21.714 230187 DEBUG nova.network.neutron [req-4c1f441d-d75a-4227-b72f-8a3849ce6944 req-276eebf3-1b82-4bcd-8e53-9dd2b02a02ef 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Refreshing network info cache for port 9852de9e-899c-4a7c-8268-07fee5003eac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 23 21:12:22 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:12:22 compute-1 nova_compute[230183]: 2025-11-23 21:12:22.291 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:12:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:12:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:22.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:12:22 compute-1 ceph-mon[80135]: pgmap v928: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 23 21:12:23 compute-1 nova_compute[230183]: 2025-11-23 21:12:23.224 230187 DEBUG nova.network.neutron [req-4c1f441d-d75a-4227-b72f-8a3849ce6944 req-276eebf3-1b82-4bcd-8e53-9dd2b02a02ef 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updated VIF entry in instance network info cache for port 9852de9e-899c-4a7c-8268-07fee5003eac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 23 21:12:23 compute-1 nova_compute[230183]: 2025-11-23 21:12:23.225 230187 DEBUG nova.network.neutron [req-4c1f441d-d75a-4227-b72f-8a3849ce6944 req-276eebf3-1b82-4bcd-8e53-9dd2b02a02ef 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updating instance_info_cache with network_info: [{"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9852de9e-899c-4a7c-8268-07fee5003eac", "address": "fa:16:3e:1a:9a:cf", "network": {"id": "a53cafa8-a74e-467c-9117-a31bd6c650ae", "bridge": "br-int", "label": "tempest-network-smoke--511994107", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9852de9e-89", "ovs_interfaceid": "9852de9e-899c-4a7c-8268-07fee5003eac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:12:23 compute-1 nova_compute[230183]: 2025-11-23 21:12:23.242 230187 DEBUG oslo_concurrency.lockutils [req-4c1f441d-d75a-4227-b72f-8a3849ce6944 req-276eebf3-1b82-4bcd-8e53-9dd2b02a02ef 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:12:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:23.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:23 compute-1 podman[239573]: 2025-11-23 21:12:23.640386346 +0000 UTC m=+0.043578454 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 21:12:23 compute-1 podman[239572]: 2025-11-23 21:12:23.698682372 +0000 UTC m=+0.104107350 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 23 21:12:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:24.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:24 compute-1 ceph-mon[80135]: pgmap v929: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 23 21:12:24 compute-1 nova_compute[230183]: 2025-11-23 21:12:24.529 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:12:25 compute-1 sudo[239617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:12:25 compute-1 sudo[239617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:12:25 compute-1 sudo[239617]: pam_unix(sudo:session): session closed for user root
Nov 23 21:12:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:12:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:25.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:12:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:26.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:26 compute-1 ceph-mon[80135]: pgmap v930: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 23 21:12:27 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:12:27 compute-1 nova_compute[230183]: 2025-11-23 21:12:27.295 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:12:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:27.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:28.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:28 compute-1 ceph-mon[80135]: pgmap v931: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 16 KiB/s wr, 1 op/s
Nov 23 21:12:28 compute-1 podman[239644]: 2025-11-23 21:12:28.636560328 +0000 UTC m=+0.056589251 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd)
Nov 23 21:12:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:29.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:29 compute-1 nova_compute[230183]: 2025-11-23 21:12:29.532 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:12:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:30.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:30 compute-1 ceph-mon[80135]: pgmap v932: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 17 KiB/s wr, 2 op/s
Nov 23 21:12:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:31.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:32 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:12:32 compute-1 nova_compute[230183]: 2025-11-23 21:12:32.298 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:12:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:32.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:32 compute-1 ceph-mon[80135]: pgmap v933: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 16 KiB/s wr, 1 op/s
Nov 23 21:12:32 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/211232 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 21:12:32 compute-1 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [ALERT] 326/211232 (4) : backend 'backend' has no server available!
Nov 23 21:12:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:12:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:33.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:12:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:12:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:34.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:34 compute-1 nova_compute[230183]: 2025-11-23 21:12:34.576 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:12:34 compute-1 ceph-mon[80135]: pgmap v934: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 4.0 KiB/s wr, 1 op/s
Nov 23 21:12:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:35.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:36.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:36 compute-1 sshd-session[239669]: Invalid user ubuntu from 92.118.39.92 port 57764
Nov 23 21:12:36 compute-1 ceph-mon[80135]: pgmap v935: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 15 KiB/s wr, 3 op/s
Nov 23 21:12:36 compute-1 sshd-session[239669]: Connection closed by invalid user ubuntu 92.118.39.92 port 57764 [preauth]
Nov 23 21:12:37 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:12:37 compute-1 nova_compute[230183]: 2025-11-23 21:12:37.300 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:12:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:37.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:12:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:38.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:12:38 compute-1 ceph-mon[80135]: pgmap v936: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 2 op/s
Nov 23 21:12:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:39.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:39 compute-1 nova_compute[230183]: 2025-11-23 21:12:39.578 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:12:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:40.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:40 compute-1 ceph-mon[80135]: pgmap v937: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 12 KiB/s wr, 3 op/s
Nov 23 21:12:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:41.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:42 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:12:42 compute-1 nova_compute[230183]: 2025-11-23 21:12:42.303 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:12:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:42.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:42 compute-1 ceph-mon[80135]: pgmap v938: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 12 KiB/s wr, 2 op/s
Nov 23 21:12:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:43.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:44.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:44 compute-1 nova_compute[230183]: 2025-11-23 21:12:44.580 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:12:44 compute-1 ceph-mon[80135]: pgmap v939: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 12 KiB/s wr, 2 op/s
Nov 23 21:12:45 compute-1 radosgw[84498]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Nov 23 21:12:45 compute-1 radosgw[84498]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Nov 23 21:12:45 compute-1 radosgw[84498]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Nov 23 21:12:45 compute-1 radosgw[84498]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Nov 23 21:12:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:45.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:45 compute-1 sudo[239675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:12:45 compute-1 sudo[239675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:12:45 compute-1 sudo[239675]: pam_unix(sudo:session): session closed for user root
Nov 23 21:12:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:46.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:46 compute-1 ceph-mon[80135]: pgmap v940: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 16 KiB/s wr, 4 op/s
Nov 23 21:12:47 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:12:47 compute-1 nova_compute[230183]: 2025-11-23 21:12:47.305 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:12:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:47.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:48.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:48 compute-1 ceph-mon[80135]: pgmap v941: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 4.8 KiB/s wr, 2 op/s
Nov 23 21:12:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:12:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:49.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:49 compute-1 nova_compute[230183]: 2025-11-23 21:12:49.583 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:12:49 compute-1 ceph-mon[80135]: pgmap v942: 337 pgs: 337 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 7.2 KiB/s wr, 50 op/s
Nov 23 21:12:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:50.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:12:51.069 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:12:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:12:51.069 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:12:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:12:51.070 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:12:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:51.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:51 compute-1 ovn_controller[132845]: 2025-11-23T21:12:51Z|00086|memory_trim|INFO|Detected inactivity (last active 30017 ms ago): trimming memory
Nov 23 21:12:52 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:12:52 compute-1 nova_compute[230183]: 2025-11-23 21:12:52.307 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:12:52 compute-1 ceph-mon[80135]: pgmap v943: 337 pgs: 337 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 8.1 KiB/s wr, 175 op/s
Nov 23 21:12:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:12:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:52.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:12:53 compute-1 sudo[239704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 21:12:53 compute-1 sudo[239704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:12:53 compute-1 sudo[239704]: pam_unix(sudo:session): session closed for user root
Nov 23 21:12:53 compute-1 sudo[239729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 21:12:53 compute-1 sudo[239729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:12:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:12:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:53.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:12:54 compute-1 sudo[239729]: pam_unix(sudo:session): session closed for user root
Nov 23 21:12:54 compute-1 ceph-mon[80135]: pgmap v944: 337 pgs: 337 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 6.7 KiB/s wr, 175 op/s
Nov 23 21:12:54 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:12:54 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 21:12:54 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:12:54 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:12:54 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 21:12:54 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 21:12:54 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:12:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:54.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:54 compute-1 nova_compute[230183]: 2025-11-23 21:12:54.585 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:12:54 compute-1 podman[239788]: 2025-11-23 21:12:54.676586044 +0000 UTC m=+0.062914420 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 21:12:54 compute-1 podman[239787]: 2025-11-23 21:12:54.699562677 +0000 UTC m=+0.095273513 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 21:12:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:55.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:12:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:56.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:12:56 compute-1 ceph-mon[80135]: pgmap v945: 337 pgs: 337 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 8.1 KiB/s wr, 175 op/s
Nov 23 21:12:57 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:12:57 compute-1 nova_compute[230183]: 2025-11-23 21:12:57.309 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:12:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:12:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:57.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:12:57 compute-1 ceph-mon[80135]: pgmap v946: 337 pgs: 337 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 4.7 KiB/s wr, 174 op/s
Nov 23 21:12:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:58.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:12:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:12:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:59.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:12:59 compute-1 nova_compute[230183]: 2025-11-23 21:12:59.616 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:12:59 compute-1 podman[239834]: 2025-11-23 21:12:59.641182082 +0000 UTC m=+0.061693347 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 23 21:13:00 compute-1 sudo[239854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 21:13:00 compute-1 sudo[239854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:13:00 compute-1 sudo[239854]: pam_unix(sudo:session): session closed for user root
Nov 23 21:13:00 compute-1 ceph-mon[80135]: pgmap v947: 337 pgs: 337 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 14 KiB/s wr, 176 op/s
Nov 23 21:13:00 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:13:00 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:13:00 compute-1 nova_compute[230183]: 2025-11-23 21:13:00.446 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:13:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:13:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:00.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:13:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:13:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:01.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:13:02 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:13:02 compute-1 nova_compute[230183]: 2025-11-23 21:13:02.312 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:02 compute-1 ceph-mon[80135]: pgmap v948: 337 pgs: 337 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 12 KiB/s wr, 128 op/s
Nov 23 21:13:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:13:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:02.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:13:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:13:03 compute-1 nova_compute[230183]: 2025-11-23 21:13:03.423 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:13:03 compute-1 nova_compute[230183]: 2025-11-23 21:13:03.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:13:03 compute-1 nova_compute[230183]: 2025-11-23 21:13:03.445 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:13:03 compute-1 nova_compute[230183]: 2025-11-23 21:13:03.447 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:13:03 compute-1 nova_compute[230183]: 2025-11-23 21:13:03.447 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:13:03 compute-1 nova_compute[230183]: 2025-11-23 21:13:03.448 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 21:13:03 compute-1 nova_compute[230183]: 2025-11-23 21:13:03.448 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:13:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:13:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:03.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:13:03 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:13:03 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4016648780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:13:03 compute-1 nova_compute[230183]: 2025-11-23 21:13:03.878 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:13:03 compute-1 nova_compute[230183]: 2025-11-23 21:13:03.959 230187 DEBUG nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 23 21:13:03 compute-1 nova_compute[230183]: 2025-11-23 21:13:03.960 230187 DEBUG nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 23 21:13:04 compute-1 nova_compute[230183]: 2025-11-23 21:13:04.104 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 21:13:04 compute-1 nova_compute[230183]: 2025-11-23 21:13:04.105 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4725MB free_disk=59.896949768066406GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 21:13:04 compute-1 nova_compute[230183]: 2025-11-23 21:13:04.105 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:13:04 compute-1 nova_compute[230183]: 2025-11-23 21:13:04.105 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:13:04 compute-1 nova_compute[230183]: 2025-11-23 21:13:04.159 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Instance 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 23 21:13:04 compute-1 nova_compute[230183]: 2025-11-23 21:13:04.159 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 21:13:04 compute-1 nova_compute[230183]: 2025-11-23 21:13:04.159 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 21:13:04 compute-1 nova_compute[230183]: 2025-11-23 21:13:04.187 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:13:04 compute-1 ceph-mon[80135]: pgmap v949: 337 pgs: 337 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 11 KiB/s wr, 2 op/s
Nov 23 21:13:04 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/4016648780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:13:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:13:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:04.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:13:04 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:13:04 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3454510770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:13:04 compute-1 nova_compute[230183]: 2025-11-23 21:13:04.616 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:13:04 compute-1 nova_compute[230183]: 2025-11-23 21:13:04.618 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:04 compute-1 nova_compute[230183]: 2025-11-23 21:13:04.622 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:13:04 compute-1 nova_compute[230183]: 2025-11-23 21:13:04.634 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:13:04 compute-1 nova_compute[230183]: 2025-11-23 21:13:04.636 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 21:13:04 compute-1 nova_compute[230183]: 2025-11-23 21:13:04.636 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:13:05 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3454510770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:13:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:13:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:05.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:13:05 compute-1 sudo[239926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:13:05 compute-1 sudo[239926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:13:05 compute-1 sudo[239926]: pam_unix(sudo:session): session closed for user root
Nov 23 21:13:05 compute-1 nova_compute[230183]: 2025-11-23 21:13:05.637 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:13:05 compute-1 nova_compute[230183]: 2025-11-23 21:13:05.637 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 21:13:05 compute-1 nova_compute[230183]: 2025-11-23 21:13:05.637 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 21:13:06 compute-1 nova_compute[230183]: 2025-11-23 21:13:06.022 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:13:06 compute-1 nova_compute[230183]: 2025-11-23 21:13:06.023 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquired lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:13:06 compute-1 nova_compute[230183]: 2025-11-23 21:13:06.023 230187 DEBUG nova.network.neutron [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 23 21:13:06 compute-1 nova_compute[230183]: 2025-11-23 21:13:06.023 230187 DEBUG nova.objects.instance [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:13:06 compute-1 ceph-mon[80135]: pgmap v950: 337 pgs: 337 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 13 KiB/s wr, 3 op/s
Nov 23 21:13:06 compute-1 nova_compute[230183]: 2025-11-23 21:13:06.488 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:06 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:06.489 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 21:13:06 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:06.490 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 23 21:13:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:13:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:06.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:13:07 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:13:07 compute-1 nova_compute[230183]: 2025-11-23 21:13:07.314 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:13:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:07.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:13:08 compute-1 ceph-mon[80135]: pgmap v951: 337 pgs: 337 active+clean; 200 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 12 KiB/s wr, 2 op/s
Nov 23 21:13:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/2656138691' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 21:13:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/2656138691' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 21:13:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:13:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:08.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:13:08 compute-1 nova_compute[230183]: 2025-11-23 21:13:08.826 230187 DEBUG nova.network.neutron [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updating instance_info_cache with network_info: [{"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9852de9e-899c-4a7c-8268-07fee5003eac", "address": "fa:16:3e:1a:9a:cf", "network": {"id": "a53cafa8-a74e-467c-9117-a31bd6c650ae", "bridge": "br-int", "label": "tempest-network-smoke--511994107", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9852de9e-89", "ovs_interfaceid": "9852de9e-899c-4a7c-8268-07fee5003eac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:13:08 compute-1 nova_compute[230183]: 2025-11-23 21:13:08.839 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Releasing lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:13:08 compute-1 nova_compute[230183]: 2025-11-23 21:13:08.839 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 23 21:13:08 compute-1 nova_compute[230183]: 2025-11-23 21:13:08.840 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:13:08 compute-1 nova_compute[230183]: 2025-11-23 21:13:08.840 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:13:08 compute-1 nova_compute[230183]: 2025-11-23 21:13:08.840 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:13:08 compute-1 nova_compute[230183]: 2025-11-23 21:13:08.840 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:13:08 compute-1 nova_compute[230183]: 2025-11-23 21:13:08.841 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:13:08 compute-1 nova_compute[230183]: 2025-11-23 21:13:08.841 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 21:13:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:13:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:09.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:13:09 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/476045377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:13:09 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1740834039' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:13:09 compute-1 nova_compute[230183]: 2025-11-23 21:13:09.620 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:09 compute-1 nova_compute[230183]: 2025-11-23 21:13:09.754 230187 DEBUG oslo_concurrency.lockutils [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "interface-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-9852de9e-899c-4a7c-8268-07fee5003eac" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:13:09 compute-1 nova_compute[230183]: 2025-11-23 21:13:09.754 230187 DEBUG oslo_concurrency.lockutils [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "interface-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-9852de9e-899c-4a7c-8268-07fee5003eac" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:13:09 compute-1 nova_compute[230183]: 2025-11-23 21:13:09.765 230187 DEBUG nova.objects.instance [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'flavor' on Instance uuid 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:13:09 compute-1 nova_compute[230183]: 2025-11-23 21:13:09.781 230187 DEBUG nova.virt.libvirt.vif [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:11:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1210792474',display_name='tempest-TestNetworkBasicOps-server-1210792474',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1210792474',id=6,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC9I5o3FOJoMlLS5RVHvg4JB6VMA0TLpRAHrRWOuj73hgQ5knZWkP8wznWff+IF5v3eA9GQgz9kKnWlcz54pfIskwjEMQ8tpar2NP2dJjbFuASygJ+AuXJaTUib24SH0fw==',key_name='tempest-TestNetworkBasicOps-192906804',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:11:16Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-jk4nm00m',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:11:16Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=4bac23b8-7bcd-4f5e-89a8-b035a16ffe36,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9852de9e-899c-4a7c-8268-07fee5003eac", "address": "fa:16:3e:1a:9a:cf", "network": {"id": "a53cafa8-a74e-467c-9117-a31bd6c650ae", "bridge": "br-int", "label": "tempest-network-smoke--511994107", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9852de9e-89", "ovs_interfaceid": "9852de9e-899c-4a7c-8268-07fee5003eac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 23 21:13:09 compute-1 nova_compute[230183]: 2025-11-23 21:13:09.781 230187 DEBUG nova.network.os_vif_util [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "9852de9e-899c-4a7c-8268-07fee5003eac", "address": "fa:16:3e:1a:9a:cf", "network": {"id": "a53cafa8-a74e-467c-9117-a31bd6c650ae", "bridge": "br-int", "label": "tempest-network-smoke--511994107", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9852de9e-89", "ovs_interfaceid": "9852de9e-899c-4a7c-8268-07fee5003eac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 23 21:13:09 compute-1 nova_compute[230183]: 2025-11-23 21:13:09.782 230187 DEBUG nova.network.os_vif_util [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1a:9a:cf,bridge_name='br-int',has_traffic_filtering=True,id=9852de9e-899c-4a7c-8268-07fee5003eac,network=Network(a53cafa8-a74e-467c-9117-a31bd6c650ae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9852de9e-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 23 21:13:09 compute-1 nova_compute[230183]: 2025-11-23 21:13:09.784 230187 DEBUG nova.virt.libvirt.guest [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1a:9a:cf"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9852de9e-89"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 23 21:13:09 compute-1 nova_compute[230183]: 2025-11-23 21:13:09.786 230187 DEBUG nova.virt.libvirt.guest [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1a:9a:cf"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9852de9e-89"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 23 21:13:09 compute-1 nova_compute[230183]: 2025-11-23 21:13:09.787 230187 DEBUG nova.virt.libvirt.driver [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Attempting to detach device tap9852de9e-89 from instance 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 23 21:13:09 compute-1 nova_compute[230183]: 2025-11-23 21:13:09.788 230187 DEBUG nova.virt.libvirt.guest [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] detach device xml: <interface type="ethernet">
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <mac address="fa:16:3e:1a:9a:cf"/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <model type="virtio"/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <driver name="vhost" rx_queue_size="512"/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <mtu size="1442"/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <target dev="tap9852de9e-89"/>
Nov 23 21:13:09 compute-1 nova_compute[230183]: </interface>
Nov 23 21:13:09 compute-1 nova_compute[230183]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 23 21:13:09 compute-1 nova_compute[230183]: 2025-11-23 21:13:09.795 230187 DEBUG nova.virt.libvirt.guest [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1a:9a:cf"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9852de9e-89"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 23 21:13:09 compute-1 nova_compute[230183]: 2025-11-23 21:13:09.797 230187 DEBUG nova.virt.libvirt.guest [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:1a:9a:cf"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9852de9e-89"/></interface>not found in domain: <domain type='kvm' id='4'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <name>instance-00000006</name>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <uuid>4bac23b8-7bcd-4f5e-89a8-b035a16ffe36</uuid>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <metadata>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <nova:name>tempest-TestNetworkBasicOps-server-1210792474</nova:name>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <nova:creationTime>2025-11-23 21:11:44</nova:creationTime>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <nova:flavor name="m1.nano">
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <nova:memory>128</nova:memory>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <nova:disk>1</nova:disk>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <nova:swap>0</nova:swap>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <nova:ephemeral>0</nova:ephemeral>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <nova:vcpus>1</nova:vcpus>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   </nova:flavor>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <nova:owner>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   </nova:owner>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <nova:ports>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <nova:port uuid="bdbb1df8-a028-4685-9661-24563619eb80">
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </nova:port>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <nova:port uuid="9852de9e-899c-4a7c-8268-07fee5003eac">
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <nova:ip type="fixed" address="10.100.0.23" ipVersion="4"/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </nova:port>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   </nova:ports>
Nov 23 21:13:09 compute-1 nova_compute[230183]: </nova:instance>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   </metadata>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <memory unit='KiB'>131072</memory>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <vcpu placement='static'>1</vcpu>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <resource>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <partition>/machine</partition>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   </resource>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <sysinfo type='smbios'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <system>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <entry name='manufacturer'>RDO</entry>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <entry name='product'>OpenStack Compute</entry>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <entry name='serial'>4bac23b8-7bcd-4f5e-89a8-b035a16ffe36</entry>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <entry name='uuid'>4bac23b8-7bcd-4f5e-89a8-b035a16ffe36</entry>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <entry name='family'>Virtual Machine</entry>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </system>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   </sysinfo>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <os>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <boot dev='hd'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <smbios mode='sysinfo'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   </os>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <features>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <acpi/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <apic/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <vmcoreinfo state='on'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   </features>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <cpu mode='custom' match='exact' check='full'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <vendor>AMD</vendor>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='require' name='x2apic'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='require' name='tsc-deadline'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='require' name='hypervisor'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='require' name='tsc_adjust'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='require' name='spec-ctrl'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='require' name='stibp'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='require' name='ssbd'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='require' name='cmp_legacy'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='require' name='overflow-recov'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='require' name='succor'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='require' name='ibrs'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='require' name='amd-ssbd'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='require' name='virt-ssbd'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='disable' name='lbrv'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='disable' name='tsc-scale'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='disable' name='vmcb-clean'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='disable' name='flushbyasid'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='disable' name='pause-filter'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='disable' name='pfthreshold'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='disable' name='xsaves'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='disable' name='svm'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='require' name='topoext'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='disable' name='npt'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='disable' name='nrip-save'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   </cpu>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <clock offset='utc'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <timer name='pit' tickpolicy='delay'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <timer name='hpet' present='no'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   </clock>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <on_poweroff>destroy</on_poweroff>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <on_reboot>restart</on_reboot>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <on_crash>destroy</on_crash>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <devices>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <disk type='network' device='disk'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <driver name='qemu' type='raw' cache='none'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <auth username='openstack'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:         <secret type='ceph' uuid='03808be8-ae4a-5548-82e6-4a294f1bc627'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       </auth>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <source protocol='rbd' name='vms/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk' index='2'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:         <host name='192.168.122.100' port='6789'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:         <host name='192.168.122.102' port='6789'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:         <host name='192.168.122.101' port='6789'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       </source>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target dev='vda' bus='virtio'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='virtio-disk0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </disk>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <disk type='network' device='cdrom'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <driver name='qemu' type='raw' cache='none'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <auth username='openstack'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:         <secret type='ceph' uuid='03808be8-ae4a-5548-82e6-4a294f1bc627'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       </auth>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <source protocol='rbd' name='vms/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk.config' index='1'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:         <host name='192.168.122.100' port='6789'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:         <host name='192.168.122.102' port='6789'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:         <host name='192.168.122.101' port='6789'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       </source>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target dev='sda' bus='sata'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <readonly/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='sata0-0-0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </disk>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='0' model='pcie-root'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pcie.0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='1' port='0x10'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.1'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='2' port='0x11'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.2'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='3' port='0x12'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.3'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='4' port='0x13'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.4'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='5' port='0x14'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.5'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='6' port='0x15'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.6'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='7' port='0x16'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.7'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='8' port='0x17'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.8'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='9' port='0x18'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.9'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='10' port='0x19'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.10'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='11' port='0x1a'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.11'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='12' port='0x1b'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.12'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='13' port='0x1c'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.13'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='14' port='0x1d'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.14'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='15' port='0x1e'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.15'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='16' port='0x1f'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.16'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='17' port='0x20'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.17'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='18' port='0x21'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.18'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='19' port='0x22'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.19'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='20' port='0x23'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.20'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='21' port='0x24'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.21'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='22' port='0x25'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.22'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='23' port='0x26'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.23'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='24' port='0x27'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.24'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='25' port='0x28'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.25'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-pci-bridge'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.26'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='usb'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='sata' index='0'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='ide'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <interface type='ethernet'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <mac address='fa:16:3e:f3:c9:f4'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target dev='tapbdbb1df8-a0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model type='virtio'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <driver name='vhost' rx_queue_size='512'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <mtu size='1442'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='net0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </interface>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <interface type='ethernet'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <mac address='fa:16:3e:1a:9a:cf'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target dev='tap9852de9e-89'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model type='virtio'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <driver name='vhost' rx_queue_size='512'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <mtu size='1442'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='net1'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </interface>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <serial type='pty'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <source path='/dev/pts/0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <log file='/var/lib/nova/instances/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36/console.log' append='off'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target type='isa-serial' port='0'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:         <model name='isa-serial'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       </target>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='serial0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </serial>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <console type='pty' tty='/dev/pts/0'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <source path='/dev/pts/0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <log file='/var/lib/nova/instances/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36/console.log' append='off'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target type='serial' port='0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='serial0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </console>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <input type='tablet' bus='usb'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='input0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='usb' bus='0' port='1'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </input>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <input type='mouse' bus='ps2'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='input1'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </input>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <input type='keyboard' bus='ps2'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='input2'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </input>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <listen type='address' address='::0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </graphics>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <audio id='1' type='none'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <video>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model type='virtio' heads='1' primary='yes'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='video0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </video>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <watchdog model='itco' action='reset'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='watchdog0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </watchdog>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <memballoon model='virtio'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <stats period='10'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='balloon0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </memballoon>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <rng model='virtio'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <backend model='random'>/dev/urandom</backend>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='rng0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </rng>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   </devices>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <label>system_u:system_r:svirt_t:s0:c536,c844</label>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c536,c844</imagelabel>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   </seclabel>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <label>+107:+107</label>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <imagelabel>+107:+107</imagelabel>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   </seclabel>
Nov 23 21:13:09 compute-1 nova_compute[230183]: </domain>
Nov 23 21:13:09 compute-1 nova_compute[230183]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 23 21:13:09 compute-1 nova_compute[230183]: 2025-11-23 21:13:09.799 230187 INFO nova.virt.libvirt.driver [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully detached device tap9852de9e-89 from instance 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 from the persistent domain config.
Nov 23 21:13:09 compute-1 nova_compute[230183]: 2025-11-23 21:13:09.799 230187 DEBUG nova.virt.libvirt.driver [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] (1/8): Attempting to detach device tap9852de9e-89 with device alias net1 from instance 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 23 21:13:09 compute-1 nova_compute[230183]: 2025-11-23 21:13:09.800 230187 DEBUG nova.virt.libvirt.guest [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] detach device xml: <interface type="ethernet">
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <mac address="fa:16:3e:1a:9a:cf"/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <model type="virtio"/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <driver name="vhost" rx_queue_size="512"/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <mtu size="1442"/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <target dev="tap9852de9e-89"/>
Nov 23 21:13:09 compute-1 nova_compute[230183]: </interface>
Nov 23 21:13:09 compute-1 nova_compute[230183]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 23 21:13:09 compute-1 kernel: tap9852de9e-89 (unregistering): left promiscuous mode
Nov 23 21:13:09 compute-1 NetworkManager[49021]: <info>  [1763932389.8437] device (tap9852de9e-89): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 23 21:13:09 compute-1 ovn_controller[132845]: 2025-11-23T21:13:09Z|00087|binding|INFO|Releasing lport 9852de9e-899c-4a7c-8268-07fee5003eac from this chassis (sb_readonly=0)
Nov 23 21:13:09 compute-1 ovn_controller[132845]: 2025-11-23T21:13:09Z|00088|binding|INFO|Setting lport 9852de9e-899c-4a7c-8268-07fee5003eac down in Southbound
Nov 23 21:13:09 compute-1 ovn_controller[132845]: 2025-11-23T21:13:09Z|00089|binding|INFO|Removing iface tap9852de9e-89 ovn-installed in OVS
Nov 23 21:13:09 compute-1 nova_compute[230183]: 2025-11-23 21:13:09.853 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:09 compute-1 nova_compute[230183]: 2025-11-23 21:13:09.859 230187 DEBUG nova.virt.libvirt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Received event <DeviceRemovedEvent: 1763932389.8586385, 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 23 21:13:09 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:09.859 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:9a:cf 10.100.0.23', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.23/28', 'neutron:device_id': '4bac23b8-7bcd-4f5e-89a8-b035a16ffe36', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a53cafa8-a74e-467c-9117-a31bd6c650ae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c22c132b-3565-4344-9558-f1d93c19cb57, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=9852de9e-899c-4a7c-8268-07fee5003eac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 21:13:09 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:09.860 142158 INFO neutron.agent.ovn.metadata.agent [-] Port 9852de9e-899c-4a7c-8268-07fee5003eac in datapath a53cafa8-a74e-467c-9117-a31bd6c650ae unbound from our chassis
Nov 23 21:13:09 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:09.861 142158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a53cafa8-a74e-467c-9117-a31bd6c650ae, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 23 21:13:09 compute-1 nova_compute[230183]: 2025-11-23 21:13:09.861 230187 DEBUG nova.virt.libvirt.driver [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Start waiting for the detach event from libvirt for device tap9852de9e-89 with device alias net1 for instance 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 23 21:13:09 compute-1 nova_compute[230183]: 2025-11-23 21:13:09.862 230187 DEBUG nova.virt.libvirt.guest [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1a:9a:cf"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9852de9e-89"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 23 21:13:09 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:09.862 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[ef8815a5-afc1-4833-b18e-45c901274652]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:09 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:09.863 142158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae namespace which is not needed anymore
Nov 23 21:13:09 compute-1 nova_compute[230183]: 2025-11-23 21:13:09.865 230187 DEBUG nova.virt.libvirt.guest [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:1a:9a:cf"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9852de9e-89"/></interface>not found in domain: <domain type='kvm' id='4'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <name>instance-00000006</name>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <uuid>4bac23b8-7bcd-4f5e-89a8-b035a16ffe36</uuid>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <metadata>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <nova:name>tempest-TestNetworkBasicOps-server-1210792474</nova:name>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <nova:creationTime>2025-11-23 21:11:44</nova:creationTime>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <nova:flavor name="m1.nano">
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <nova:memory>128</nova:memory>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <nova:disk>1</nova:disk>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <nova:swap>0</nova:swap>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <nova:ephemeral>0</nova:ephemeral>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <nova:vcpus>1</nova:vcpus>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   </nova:flavor>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <nova:owner>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   </nova:owner>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <nova:ports>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <nova:port uuid="bdbb1df8-a028-4685-9661-24563619eb80">
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </nova:port>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <nova:port uuid="9852de9e-899c-4a7c-8268-07fee5003eac">
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <nova:ip type="fixed" address="10.100.0.23" ipVersion="4"/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </nova:port>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   </nova:ports>
Nov 23 21:13:09 compute-1 nova_compute[230183]: </nova:instance>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   </metadata>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <memory unit='KiB'>131072</memory>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <vcpu placement='static'>1</vcpu>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <resource>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <partition>/machine</partition>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   </resource>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <sysinfo type='smbios'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <system>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <entry name='manufacturer'>RDO</entry>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <entry name='product'>OpenStack Compute</entry>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <entry name='serial'>4bac23b8-7bcd-4f5e-89a8-b035a16ffe36</entry>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <entry name='uuid'>4bac23b8-7bcd-4f5e-89a8-b035a16ffe36</entry>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <entry name='family'>Virtual Machine</entry>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </system>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   </sysinfo>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <os>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <boot dev='hd'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <smbios mode='sysinfo'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   </os>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <features>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <acpi/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <apic/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <vmcoreinfo state='on'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   </features>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <cpu mode='custom' match='exact' check='full'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <vendor>AMD</vendor>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='require' name='x2apic'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='require' name='tsc-deadline'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='require' name='hypervisor'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='require' name='tsc_adjust'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='require' name='spec-ctrl'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='require' name='stibp'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='require' name='ssbd'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='require' name='cmp_legacy'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='require' name='overflow-recov'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='require' name='succor'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='require' name='ibrs'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='require' name='amd-ssbd'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='require' name='virt-ssbd'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='disable' name='lbrv'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='disable' name='tsc-scale'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='disable' name='vmcb-clean'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='disable' name='flushbyasid'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='disable' name='pause-filter'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='disable' name='pfthreshold'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='disable' name='xsaves'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='disable' name='svm'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='require' name='topoext'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='disable' name='npt'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <feature policy='disable' name='nrip-save'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   </cpu>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <clock offset='utc'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <timer name='pit' tickpolicy='delay'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <timer name='hpet' present='no'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   </clock>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <on_poweroff>destroy</on_poweroff>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <on_reboot>restart</on_reboot>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <on_crash>destroy</on_crash>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <devices>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <disk type='network' device='disk'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <driver name='qemu' type='raw' cache='none'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <auth username='openstack'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:         <secret type='ceph' uuid='03808be8-ae4a-5548-82e6-4a294f1bc627'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       </auth>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <source protocol='rbd' name='vms/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk' index='2'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:         <host name='192.168.122.100' port='6789'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:         <host name='192.168.122.102' port='6789'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:         <host name='192.168.122.101' port='6789'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       </source>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target dev='vda' bus='virtio'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='virtio-disk0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </disk>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <disk type='network' device='cdrom'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <driver name='qemu' type='raw' cache='none'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <auth username='openstack'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:         <secret type='ceph' uuid='03808be8-ae4a-5548-82e6-4a294f1bc627'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       </auth>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <source protocol='rbd' name='vms/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk.config' index='1'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:         <host name='192.168.122.100' port='6789'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:         <host name='192.168.122.102' port='6789'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:         <host name='192.168.122.101' port='6789'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       </source>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target dev='sda' bus='sata'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <readonly/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='sata0-0-0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </disk>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='0' model='pcie-root'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pcie.0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='1' port='0x10'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.1'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='2' port='0x11'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.2'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='3' port='0x12'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.3'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='4' port='0x13'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.4'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='5' port='0x14'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.5'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='6' port='0x15'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.6'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='7' port='0x16'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.7'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='8' port='0x17'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.8'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='9' port='0x18'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.9'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='10' port='0x19'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.10'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='11' port='0x1a'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.11'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='12' port='0x1b'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.12'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='13' port='0x1c'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.13'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='14' port='0x1d'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.14'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='15' port='0x1e'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.15'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='16' port='0x1f'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.16'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='17' port='0x20'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.17'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='18' port='0x21'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.18'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='19' port='0x22'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.19'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='20' port='0x23'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.20'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='21' port='0x24'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.21'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='22' port='0x25'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.22'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='23' port='0x26'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.23'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='24' port='0x27'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.24'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target chassis='25' port='0x28'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.25'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model name='pcie-pci-bridge'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='pci.26'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='usb'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <controller type='sata' index='0'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='ide'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <interface type='ethernet'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <mac address='fa:16:3e:f3:c9:f4'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target dev='tapbdbb1df8-a0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model type='virtio'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <driver name='vhost' rx_queue_size='512'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <mtu size='1442'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='net0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </interface>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <serial type='pty'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <source path='/dev/pts/0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <log file='/var/lib/nova/instances/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36/console.log' append='off'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target type='isa-serial' port='0'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:         <model name='isa-serial'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       </target>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='serial0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </serial>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <console type='pty' tty='/dev/pts/0'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <source path='/dev/pts/0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <log file='/var/lib/nova/instances/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36/console.log' append='off'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <target type='serial' port='0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='serial0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </console>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <input type='tablet' bus='usb'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='input0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='usb' bus='0' port='1'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </input>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <input type='mouse' bus='ps2'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='input1'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </input>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <input type='keyboard' bus='ps2'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='input2'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </input>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <listen type='address' address='::0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </graphics>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <audio id='1' type='none'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <video>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <model type='virtio' heads='1' primary='yes'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='video0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </video>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <watchdog model='itco' action='reset'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='watchdog0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </watchdog>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <memballoon model='virtio'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <stats period='10'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='balloon0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </memballoon>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <rng model='virtio'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <backend model='random'>/dev/urandom</backend>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <alias name='rng0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </rng>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   </devices>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <label>system_u:system_r:svirt_t:s0:c536,c844</label>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c536,c844</imagelabel>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   </seclabel>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <label>+107:+107</label>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <imagelabel>+107:+107</imagelabel>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   </seclabel>
Nov 23 21:13:09 compute-1 nova_compute[230183]: </domain>
Nov 23 21:13:09 compute-1 nova_compute[230183]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 23 21:13:09 compute-1 nova_compute[230183]: 2025-11-23 21:13:09.865 230187 INFO nova.virt.libvirt.driver [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully detached device tap9852de9e-89 from instance 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 from the live domain config.
Nov 23 21:13:09 compute-1 nova_compute[230183]: 2025-11-23 21:13:09.866 230187 DEBUG nova.virt.libvirt.vif [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:11:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1210792474',display_name='tempest-TestNetworkBasicOps-server-1210792474',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1210792474',id=6,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC9I5o3FOJoMlLS5RVHvg4JB6VMA0TLpRAHrRWOuj73hgQ5knZWkP8wznWff+IF5v3eA9GQgz9kKnWlcz54pfIskwjEMQ8tpar2NP2dJjbFuASygJ+AuXJaTUib24SH0fw==',key_name='tempest-TestNetworkBasicOps-192906804',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:11:16Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-jk4nm00m',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:11:16Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=4bac23b8-7bcd-4f5e-89a8-b035a16ffe36,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9852de9e-899c-4a7c-8268-07fee5003eac", "address": "fa:16:3e:1a:9a:cf", "network": {"id": "a53cafa8-a74e-467c-9117-a31bd6c650ae", "bridge": "br-int", "label": "tempest-network-smoke--511994107", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9852de9e-89", "ovs_interfaceid": "9852de9e-899c-4a7c-8268-07fee5003eac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 23 21:13:09 compute-1 nova_compute[230183]: 2025-11-23 21:13:09.866 230187 DEBUG nova.network.os_vif_util [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "9852de9e-899c-4a7c-8268-07fee5003eac", "address": "fa:16:3e:1a:9a:cf", "network": {"id": "a53cafa8-a74e-467c-9117-a31bd6c650ae", "bridge": "br-int", "label": "tempest-network-smoke--511994107", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9852de9e-89", "ovs_interfaceid": "9852de9e-899c-4a7c-8268-07fee5003eac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 23 21:13:09 compute-1 nova_compute[230183]: 2025-11-23 21:13:09.867 230187 DEBUG nova.network.os_vif_util [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1a:9a:cf,bridge_name='br-int',has_traffic_filtering=True,id=9852de9e-899c-4a7c-8268-07fee5003eac,network=Network(a53cafa8-a74e-467c-9117-a31bd6c650ae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9852de9e-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 23 21:13:09 compute-1 nova_compute[230183]: 2025-11-23 21:13:09.868 230187 DEBUG os_vif [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:9a:cf,bridge_name='br-int',has_traffic_filtering=True,id=9852de9e-899c-4a7c-8268-07fee5003eac,network=Network(a53cafa8-a74e-467c-9117-a31bd6c650ae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9852de9e-89') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 23 21:13:09 compute-1 nova_compute[230183]: 2025-11-23 21:13:09.869 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:09 compute-1 nova_compute[230183]: 2025-11-23 21:13:09.870 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9852de9e-89, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:13:09 compute-1 nova_compute[230183]: 2025-11-23 21:13:09.871 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:09 compute-1 nova_compute[230183]: 2025-11-23 21:13:09.873 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:09 compute-1 nova_compute[230183]: 2025-11-23 21:13:09.875 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:09 compute-1 nova_compute[230183]: 2025-11-23 21:13:09.877 230187 INFO os_vif [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:9a:cf,bridge_name='br-int',has_traffic_filtering=True,id=9852de9e-899c-4a7c-8268-07fee5003eac,network=Network(a53cafa8-a74e-467c-9117-a31bd6c650ae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9852de9e-89')
Nov 23 21:13:09 compute-1 nova_compute[230183]: 2025-11-23 21:13:09.878 230187 DEBUG nova.virt.libvirt.guest [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <nova:name>tempest-TestNetworkBasicOps-server-1210792474</nova:name>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <nova:creationTime>2025-11-23 21:13:09</nova:creationTime>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <nova:flavor name="m1.nano">
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <nova:memory>128</nova:memory>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <nova:disk>1</nova:disk>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <nova:swap>0</nova:swap>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <nova:ephemeral>0</nova:ephemeral>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <nova:vcpus>1</nova:vcpus>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   </nova:flavor>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <nova:owner>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   </nova:owner>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   <nova:ports>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     <nova:port uuid="bdbb1df8-a028-4685-9661-24563619eb80">
Nov 23 21:13:09 compute-1 nova_compute[230183]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 23 21:13:09 compute-1 nova_compute[230183]:     </nova:port>
Nov 23 21:13:09 compute-1 nova_compute[230183]:   </nova:ports>
Nov 23 21:13:09 compute-1 nova_compute[230183]: </nova:instance>
Nov 23 21:13:09 compute-1 nova_compute[230183]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 23 21:13:09 compute-1 neutron-haproxy-ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae[238060]: [NOTICE]   (238065) : haproxy version is 2.8.14-c23fe91
Nov 23 21:13:09 compute-1 neutron-haproxy-ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae[238060]: [NOTICE]   (238065) : path to executable is /usr/sbin/haproxy
Nov 23 21:13:09 compute-1 neutron-haproxy-ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae[238060]: [WARNING]  (238065) : Exiting Master process...
Nov 23 21:13:09 compute-1 neutron-haproxy-ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae[238060]: [WARNING]  (238065) : Exiting Master process...
Nov 23 21:13:09 compute-1 neutron-haproxy-ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae[238060]: [ALERT]    (238065) : Current worker (238068) exited with code 143 (Terminated)
Nov 23 21:13:09 compute-1 neutron-haproxy-ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae[238060]: [WARNING]  (238065) : All workers exited. Exiting... (0)
Nov 23 21:13:09 compute-1 systemd[1]: libpod-9dd79f3e519c65bbc0e5dce6d36ef4f64107f7bc93e1476f17856b8c412695ed.scope: Deactivated successfully.
Nov 23 21:13:09 compute-1 podman[239976]: 2025-11-23 21:13:09.993936951 +0000 UTC m=+0.040971284 container died 9dd79f3e519c65bbc0e5dce6d36ef4f64107f7bc93e1476f17856b8c412695ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 23 21:13:10 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9dd79f3e519c65bbc0e5dce6d36ef4f64107f7bc93e1476f17856b8c412695ed-userdata-shm.mount: Deactivated successfully.
Nov 23 21:13:10 compute-1 systemd[1]: var-lib-containers-storage-overlay-0150f09f49afcae45b35871ed00a9581191e83bc7cd591edc409336857fd6c40-merged.mount: Deactivated successfully.
Nov 23 21:13:10 compute-1 podman[239976]: 2025-11-23 21:13:10.030490657 +0000 UTC m=+0.077524990 container cleanup 9dd79f3e519c65bbc0e5dce6d36ef4f64107f7bc93e1476f17856b8c412695ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 21:13:10 compute-1 systemd[1]: libpod-conmon-9dd79f3e519c65bbc0e5dce6d36ef4f64107f7bc93e1476f17856b8c412695ed.scope: Deactivated successfully.
Nov 23 21:13:10 compute-1 podman[240004]: 2025-11-23 21:13:10.08381275 +0000 UTC m=+0.035828587 container remove 9dd79f3e519c65bbc0e5dce6d36ef4f64107f7bc93e1476f17856b8c412695ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 23 21:13:10 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:10.088 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[c5193350-1924-4069-ad7e-0087cd184cde]: (4, ('Sun Nov 23 09:13:09 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae (9dd79f3e519c65bbc0e5dce6d36ef4f64107f7bc93e1476f17856b8c412695ed)\n9dd79f3e519c65bbc0e5dce6d36ef4f64107f7bc93e1476f17856b8c412695ed\nSun Nov 23 09:13:10 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae (9dd79f3e519c65bbc0e5dce6d36ef4f64107f7bc93e1476f17856b8c412695ed)\n9dd79f3e519c65bbc0e5dce6d36ef4f64107f7bc93e1476f17856b8c412695ed\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:10 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:10.089 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[9c2c24f3-6e85-4b4f-ae79-7adf7a6dcab2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:10 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:10.090 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa53cafa8-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.092 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:10 compute-1 kernel: tapa53cafa8-a0: left promiscuous mode
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.104 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.105 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:10 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:10.107 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[42060850-8a1f-4311-89a1-6a6c172921fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:10 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:10.125 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[b3abcc47-2dcf-46f6-bfe1-3cd150ed3c96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:10 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:10.126 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[78799e3f-9a5f-4bcd-9064-2755d62367a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:10 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:10.138 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[a74401e5-b9eb-432a-b748-4a957b63438f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 425314, 'reachable_time': 31134, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240022, 'error': None, 'target': 'ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:10 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:10.140 142272 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 23 21:13:10 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:10.140 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[21921e47-ff0e-4ebc-86fe-3521eb38b1f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:10 compute-1 systemd[1]: run-netns-ovnmeta\x2da53cafa8\x2da74e\x2d467c\x2d9117\x2da31bd6c650ae.mount: Deactivated successfully.
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.418 230187 DEBUG oslo_concurrency.lockutils [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.418 230187 DEBUG oslo_concurrency.lockutils [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquired lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.420 230187 DEBUG nova.network.neutron [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 23 21:13:10 compute-1 ceph-mon[80135]: pgmap v952: 337 pgs: 337 active+clean; 183 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 14 KiB/s wr, 12 op/s
Nov 23 21:13:10 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3558533142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:13:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:13:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:10.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.610 230187 DEBUG nova.compute.manager [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received event network-vif-unplugged-9852de9e-899c-4a7c-8268-07fee5003eac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.610 230187 DEBUG oslo_concurrency.lockutils [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.611 230187 DEBUG oslo_concurrency.lockutils [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.611 230187 DEBUG oslo_concurrency.lockutils [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.611 230187 DEBUG nova.compute.manager [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] No waiting events found dispatching network-vif-unplugged-9852de9e-899c-4a7c-8268-07fee5003eac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.612 230187 WARNING nova.compute.manager [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received unexpected event network-vif-unplugged-9852de9e-899c-4a7c-8268-07fee5003eac for instance with vm_state active and task_state None.
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.612 230187 DEBUG nova.compute.manager [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received event network-vif-plugged-9852de9e-899c-4a7c-8268-07fee5003eac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.612 230187 DEBUG oslo_concurrency.lockutils [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.613 230187 DEBUG oslo_concurrency.lockutils [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.613 230187 DEBUG oslo_concurrency.lockutils [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.613 230187 DEBUG nova.compute.manager [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] No waiting events found dispatching network-vif-plugged-9852de9e-899c-4a7c-8268-07fee5003eac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.613 230187 WARNING nova.compute.manager [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received unexpected event network-vif-plugged-9852de9e-899c-4a7c-8268-07fee5003eac for instance with vm_state active and task_state None.
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.614 230187 DEBUG nova.compute.manager [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received event network-vif-deleted-9852de9e-899c-4a7c-8268-07fee5003eac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.614 230187 INFO nova.compute.manager [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Neutron deleted interface 9852de9e-899c-4a7c-8268-07fee5003eac; detaching it from the instance and deleting it from the info cache
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.614 230187 DEBUG nova.network.neutron [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updating instance_info_cache with network_info: [{"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.639 230187 DEBUG nova.objects.instance [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lazy-loading 'system_metadata' on Instance uuid 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.659 230187 DEBUG nova.objects.instance [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lazy-loading 'flavor' on Instance uuid 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.676 230187 DEBUG nova.virt.libvirt.vif [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:11:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1210792474',display_name='tempest-TestNetworkBasicOps-server-1210792474',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1210792474',id=6,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC9I5o3FOJoMlLS5RVHvg4JB6VMA0TLpRAHrRWOuj73hgQ5knZWkP8wznWff+IF5v3eA9GQgz9kKnWlcz54pfIskwjEMQ8tpar2NP2dJjbFuASygJ+AuXJaTUib24SH0fw==',key_name='tempest-TestNetworkBasicOps-192906804',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:11:16Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-jk4nm00m',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:11:16Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=4bac23b8-7bcd-4f5e-89a8-b035a16ffe36,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9852de9e-899c-4a7c-8268-07fee5003eac", "address": "fa:16:3e:1a:9a:cf", "network": {"id": "a53cafa8-a74e-467c-9117-a31bd6c650ae", "bridge": "br-int", "label": "tempest-network-smoke--511994107", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9852de9e-89", "ovs_interfaceid": "9852de9e-899c-4a7c-8268-07fee5003eac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.676 230187 DEBUG nova.network.os_vif_util [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Converting VIF {"id": "9852de9e-899c-4a7c-8268-07fee5003eac", "address": "fa:16:3e:1a:9a:cf", "network": {"id": "a53cafa8-a74e-467c-9117-a31bd6c650ae", "bridge": "br-int", "label": "tempest-network-smoke--511994107", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9852de9e-89", "ovs_interfaceid": "9852de9e-899c-4a7c-8268-07fee5003eac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.678 230187 DEBUG nova.network.os_vif_util [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1a:9a:cf,bridge_name='br-int',has_traffic_filtering=True,id=9852de9e-899c-4a7c-8268-07fee5003eac,network=Network(a53cafa8-a74e-467c-9117-a31bd6c650ae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9852de9e-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.683 230187 DEBUG nova.virt.libvirt.guest [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1a:9a:cf"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9852de9e-89"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.688 230187 DEBUG nova.virt.libvirt.guest [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:1a:9a:cf"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9852de9e-89"/></interface> not found in domain: <domain type='kvm' id='4'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <name>instance-00000006</name>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <uuid>4bac23b8-7bcd-4f5e-89a8-b035a16ffe36</uuid>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <metadata>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <nova:name>tempest-TestNetworkBasicOps-server-1210792474</nova:name>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <nova:creationTime>2025-11-23 21:13:09</nova:creationTime>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <nova:flavor name="m1.nano">
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <nova:memory>128</nova:memory>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <nova:disk>1</nova:disk>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <nova:swap>0</nova:swap>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <nova:ephemeral>0</nova:ephemeral>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <nova:vcpus>1</nova:vcpus>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   </nova:flavor>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <nova:owner>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   </nova:owner>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <nova:ports>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <nova:port uuid="bdbb1df8-a028-4685-9661-24563619eb80">
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </nova:port>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   </nova:ports>
Nov 23 21:13:10 compute-1 nova_compute[230183]: </nova:instance>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   </metadata>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <memory unit='KiB'>131072</memory>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <vcpu placement='static'>1</vcpu>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <resource>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <partition>/machine</partition>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   </resource>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <sysinfo type='smbios'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <system>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <entry name='manufacturer'>RDO</entry>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <entry name='product'>OpenStack Compute</entry>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <entry name='serial'>4bac23b8-7bcd-4f5e-89a8-b035a16ffe36</entry>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <entry name='uuid'>4bac23b8-7bcd-4f5e-89a8-b035a16ffe36</entry>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <entry name='family'>Virtual Machine</entry>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </system>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   </sysinfo>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <os>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <boot dev='hd'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <smbios mode='sysinfo'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   </os>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <features>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <acpi/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <apic/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <vmcoreinfo state='on'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   </features>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <cpu mode='custom' match='exact' check='full'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <vendor>AMD</vendor>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='require' name='x2apic'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='require' name='tsc-deadline'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='require' name='hypervisor'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='require' name='tsc_adjust'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='require' name='spec-ctrl'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='require' name='stibp'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='require' name='ssbd'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='require' name='cmp_legacy'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='require' name='overflow-recov'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='require' name='succor'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='require' name='ibrs'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='require' name='amd-ssbd'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='require' name='virt-ssbd'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='disable' name='lbrv'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='disable' name='tsc-scale'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='disable' name='vmcb-clean'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='disable' name='flushbyasid'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='disable' name='pause-filter'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='disable' name='pfthreshold'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='disable' name='xsaves'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='disable' name='svm'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='require' name='topoext'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='disable' name='npt'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='disable' name='nrip-save'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   </cpu>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <clock offset='utc'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <timer name='pit' tickpolicy='delay'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <timer name='hpet' present='no'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   </clock>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <on_poweroff>destroy</on_poweroff>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <on_reboot>restart</on_reboot>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <on_crash>destroy</on_crash>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <devices>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <disk type='network' device='disk'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <driver name='qemu' type='raw' cache='none'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <auth username='openstack'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:         <secret type='ceph' uuid='03808be8-ae4a-5548-82e6-4a294f1bc627'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       </auth>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <source protocol='rbd' name='vms/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk' index='2'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:         <host name='192.168.122.100' port='6789'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:         <host name='192.168.122.102' port='6789'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:         <host name='192.168.122.101' port='6789'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       </source>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target dev='vda' bus='virtio'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='virtio-disk0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </disk>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <disk type='network' device='cdrom'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <driver name='qemu' type='raw' cache='none'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <auth username='openstack'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:         <secret type='ceph' uuid='03808be8-ae4a-5548-82e6-4a294f1bc627'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       </auth>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <source protocol='rbd' name='vms/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk.config' index='1'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:         <host name='192.168.122.100' port='6789'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:         <host name='192.168.122.102' port='6789'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:         <host name='192.168.122.101' port='6789'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       </source>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target dev='sda' bus='sata'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <readonly/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='sata0-0-0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </disk>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='0' model='pcie-root'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pcie.0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='1' port='0x10'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.1'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='2' port='0x11'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.2'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='3' port='0x12'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.3'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='4' port='0x13'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.4'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='5' port='0x14'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.5'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='6' port='0x15'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.6'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='7' port='0x16'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.7'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='8' port='0x17'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.8'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='9' port='0x18'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.9'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='10' port='0x19'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.10'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='11' port='0x1a'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.11'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='12' port='0x1b'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.12'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='13' port='0x1c'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.13'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='14' port='0x1d'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.14'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='15' port='0x1e'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.15'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='16' port='0x1f'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.16'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='17' port='0x20'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.17'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='18' port='0x21'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.18'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='19' port='0x22'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.19'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='20' port='0x23'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.20'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='21' port='0x24'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.21'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='22' port='0x25'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.22'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='23' port='0x26'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.23'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='24' port='0x27'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.24'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='25' port='0x28'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.25'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-pci-bridge'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.26'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='usb'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='sata' index='0'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='ide'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <interface type='ethernet'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <mac address='fa:16:3e:f3:c9:f4'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target dev='tapbdbb1df8-a0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model type='virtio'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <driver name='vhost' rx_queue_size='512'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <mtu size='1442'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='net0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </interface>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <serial type='pty'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <source path='/dev/pts/0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <log file='/var/lib/nova/instances/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36/console.log' append='off'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target type='isa-serial' port='0'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:         <model name='isa-serial'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       </target>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='serial0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </serial>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <console type='pty' tty='/dev/pts/0'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <source path='/dev/pts/0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <log file='/var/lib/nova/instances/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36/console.log' append='off'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target type='serial' port='0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='serial0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </console>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <input type='tablet' bus='usb'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='input0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='usb' bus='0' port='1'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </input>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <input type='mouse' bus='ps2'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='input1'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </input>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <input type='keyboard' bus='ps2'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='input2'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </input>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <listen type='address' address='::0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </graphics>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <audio id='1' type='none'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <video>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model type='virtio' heads='1' primary='yes'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='video0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </video>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <watchdog model='itco' action='reset'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='watchdog0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </watchdog>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <memballoon model='virtio'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <stats period='10'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='balloon0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </memballoon>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <rng model='virtio'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <backend model='random'>/dev/urandom</backend>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='rng0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </rng>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   </devices>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <label>system_u:system_r:svirt_t:s0:c536,c844</label>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c536,c844</imagelabel>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   </seclabel>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <label>+107:+107</label>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <imagelabel>+107:+107</imagelabel>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   </seclabel>
Nov 23 21:13:10 compute-1 nova_compute[230183]: </domain>
Nov 23 21:13:10 compute-1 nova_compute[230183]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.689 230187 DEBUG nova.virt.libvirt.guest [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1a:9a:cf"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9852de9e-89"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.693 230187 DEBUG nova.virt.libvirt.guest [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:1a:9a:cf"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9852de9e-89"/></interface>not found in domain: <domain type='kvm' id='4'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <name>instance-00000006</name>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <uuid>4bac23b8-7bcd-4f5e-89a8-b035a16ffe36</uuid>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <metadata>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <nova:name>tempest-TestNetworkBasicOps-server-1210792474</nova:name>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <nova:creationTime>2025-11-23 21:13:09</nova:creationTime>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <nova:flavor name="m1.nano">
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <nova:memory>128</nova:memory>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <nova:disk>1</nova:disk>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <nova:swap>0</nova:swap>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <nova:ephemeral>0</nova:ephemeral>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <nova:vcpus>1</nova:vcpus>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   </nova:flavor>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <nova:owner>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   </nova:owner>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <nova:ports>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <nova:port uuid="bdbb1df8-a028-4685-9661-24563619eb80">
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </nova:port>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   </nova:ports>
Nov 23 21:13:10 compute-1 nova_compute[230183]: </nova:instance>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   </metadata>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <memory unit='KiB'>131072</memory>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <vcpu placement='static'>1</vcpu>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <resource>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <partition>/machine</partition>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   </resource>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <sysinfo type='smbios'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <system>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <entry name='manufacturer'>RDO</entry>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <entry name='product'>OpenStack Compute</entry>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <entry name='serial'>4bac23b8-7bcd-4f5e-89a8-b035a16ffe36</entry>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <entry name='uuid'>4bac23b8-7bcd-4f5e-89a8-b035a16ffe36</entry>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <entry name='family'>Virtual Machine</entry>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </system>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   </sysinfo>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <os>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <boot dev='hd'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <smbios mode='sysinfo'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   </os>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <features>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <acpi/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <apic/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <vmcoreinfo state='on'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   </features>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <cpu mode='custom' match='exact' check='full'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <vendor>AMD</vendor>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='require' name='x2apic'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='require' name='tsc-deadline'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='require' name='hypervisor'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='require' name='tsc_adjust'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='require' name='spec-ctrl'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='require' name='stibp'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='require' name='ssbd'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='require' name='cmp_legacy'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='require' name='overflow-recov'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='require' name='succor'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='require' name='ibrs'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='require' name='amd-ssbd'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='require' name='virt-ssbd'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='disable' name='lbrv'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='disable' name='tsc-scale'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='disable' name='vmcb-clean'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='disable' name='flushbyasid'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='disable' name='pause-filter'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='disable' name='pfthreshold'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='disable' name='xsaves'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='disable' name='svm'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='require' name='topoext'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='disable' name='npt'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <feature policy='disable' name='nrip-save'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   </cpu>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <clock offset='utc'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <timer name='pit' tickpolicy='delay'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <timer name='hpet' present='no'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   </clock>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <on_poweroff>destroy</on_poweroff>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <on_reboot>restart</on_reboot>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <on_crash>destroy</on_crash>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <devices>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <disk type='network' device='disk'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <driver name='qemu' type='raw' cache='none'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <auth username='openstack'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:         <secret type='ceph' uuid='03808be8-ae4a-5548-82e6-4a294f1bc627'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       </auth>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <source protocol='rbd' name='vms/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk' index='2'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:         <host name='192.168.122.100' port='6789'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:         <host name='192.168.122.102' port='6789'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:         <host name='192.168.122.101' port='6789'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       </source>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target dev='vda' bus='virtio'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='virtio-disk0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </disk>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <disk type='network' device='cdrom'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <driver name='qemu' type='raw' cache='none'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <auth username='openstack'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:         <secret type='ceph' uuid='03808be8-ae4a-5548-82e6-4a294f1bc627'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       </auth>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <source protocol='rbd' name='vms/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk.config' index='1'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:         <host name='192.168.122.100' port='6789'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:         <host name='192.168.122.102' port='6789'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:         <host name='192.168.122.101' port='6789'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       </source>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target dev='sda' bus='sata'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <readonly/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='sata0-0-0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </disk>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='0' model='pcie-root'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pcie.0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='1' port='0x10'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.1'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='2' port='0x11'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.2'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='3' port='0x12'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.3'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='4' port='0x13'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.4'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='5' port='0x14'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.5'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='6' port='0x15'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.6'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='7' port='0x16'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.7'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='8' port='0x17'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.8'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='9' port='0x18'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.9'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='10' port='0x19'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.10'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='11' port='0x1a'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.11'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='12' port='0x1b'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.12'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='13' port='0x1c'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.13'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='14' port='0x1d'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.14'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='15' port='0x1e'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.15'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='16' port='0x1f'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.16'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='17' port='0x20'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.17'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='18' port='0x21'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.18'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='19' port='0x22'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.19'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='20' port='0x23'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.20'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='21' port='0x24'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.21'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='22' port='0x25'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.22'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='23' port='0x26'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.23'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='24' port='0x27'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.24'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-root-port'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target chassis='25' port='0x28'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.25'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model name='pcie-pci-bridge'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='pci.26'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='usb'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <controller type='sata' index='0'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='ide'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </controller>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <interface type='ethernet'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <mac address='fa:16:3e:f3:c9:f4'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target dev='tapbdbb1df8-a0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model type='virtio'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <driver name='vhost' rx_queue_size='512'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <mtu size='1442'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='net0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </interface>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <serial type='pty'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <source path='/dev/pts/0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <log file='/var/lib/nova/instances/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36/console.log' append='off'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target type='isa-serial' port='0'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:         <model name='isa-serial'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       </target>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='serial0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </serial>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <console type='pty' tty='/dev/pts/0'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <source path='/dev/pts/0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <log file='/var/lib/nova/instances/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36/console.log' append='off'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <target type='serial' port='0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='serial0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </console>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <input type='tablet' bus='usb'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='input0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='usb' bus='0' port='1'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </input>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <input type='mouse' bus='ps2'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='input1'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </input>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <input type='keyboard' bus='ps2'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='input2'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </input>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <listen type='address' address='::0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </graphics>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <audio id='1' type='none'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <video>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <model type='virtio' heads='1' primary='yes'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='video0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </video>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <watchdog model='itco' action='reset'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='watchdog0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </watchdog>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <memballoon model='virtio'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <stats period='10'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='balloon0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </memballoon>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <rng model='virtio'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <backend model='random'>/dev/urandom</backend>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <alias name='rng0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </rng>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   </devices>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <label>system_u:system_r:svirt_t:s0:c536,c844</label>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c536,c844</imagelabel>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   </seclabel>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <label>+107:+107</label>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <imagelabel>+107:+107</imagelabel>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   </seclabel>
Nov 23 21:13:10 compute-1 nova_compute[230183]: </domain>
Nov 23 21:13:10 compute-1 nova_compute[230183]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.694 230187 WARNING nova.virt.libvirt.driver [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Detaching interface fa:16:3e:1a:9a:cf failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap9852de9e-89' not found.
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.695 230187 DEBUG nova.virt.libvirt.vif [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:11:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1210792474',display_name='tempest-TestNetworkBasicOps-server-1210792474',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1210792474',id=6,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC9I5o3FOJoMlLS5RVHvg4JB6VMA0TLpRAHrRWOuj73hgQ5knZWkP8wznWff+IF5v3eA9GQgz9kKnWlcz54pfIskwjEMQ8tpar2NP2dJjbFuASygJ+AuXJaTUib24SH0fw==',key_name='tempest-TestNetworkBasicOps-192906804',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:11:16Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-jk4nm00m',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:11:16Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=4bac23b8-7bcd-4f5e-89a8-b035a16ffe36,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9852de9e-899c-4a7c-8268-07fee5003eac", "address": "fa:16:3e:1a:9a:cf", "network": {"id": "a53cafa8-a74e-467c-9117-a31bd6c650ae", "bridge": "br-int", "label": "tempest-network-smoke--511994107", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9852de9e-89", "ovs_interfaceid": "9852de9e-899c-4a7c-8268-07fee5003eac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.696 230187 DEBUG nova.network.os_vif_util [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Converting VIF {"id": "9852de9e-899c-4a7c-8268-07fee5003eac", "address": "fa:16:3e:1a:9a:cf", "network": {"id": "a53cafa8-a74e-467c-9117-a31bd6c650ae", "bridge": "br-int", "label": "tempest-network-smoke--511994107", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9852de9e-89", "ovs_interfaceid": "9852de9e-899c-4a7c-8268-07fee5003eac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.697 230187 DEBUG nova.network.os_vif_util [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1a:9a:cf,bridge_name='br-int',has_traffic_filtering=True,id=9852de9e-899c-4a7c-8268-07fee5003eac,network=Network(a53cafa8-a74e-467c-9117-a31bd6c650ae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9852de9e-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.697 230187 DEBUG os_vif [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:9a:cf,bridge_name='br-int',has_traffic_filtering=True,id=9852de9e-899c-4a7c-8268-07fee5003eac,network=Network(a53cafa8-a74e-467c-9117-a31bd6c650ae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9852de9e-89') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.700 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.700 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9852de9e-89, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.701 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.703 230187 INFO os_vif [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:9a:cf,bridge_name='br-int',has_traffic_filtering=True,id=9852de9e-899c-4a7c-8268-07fee5003eac,network=Network(a53cafa8-a74e-467c-9117-a31bd6c650ae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9852de9e-89')
Nov 23 21:13:10 compute-1 nova_compute[230183]: 2025-11-23 21:13:10.704 230187 DEBUG nova.virt.libvirt.guest [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <nova:name>tempest-TestNetworkBasicOps-server-1210792474</nova:name>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <nova:creationTime>2025-11-23 21:13:10</nova:creationTime>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <nova:flavor name="m1.nano">
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <nova:memory>128</nova:memory>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <nova:disk>1</nova:disk>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <nova:swap>0</nova:swap>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <nova:ephemeral>0</nova:ephemeral>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <nova:vcpus>1</nova:vcpus>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   </nova:flavor>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <nova:owner>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   </nova:owner>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   <nova:ports>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     <nova:port uuid="bdbb1df8-a028-4685-9661-24563619eb80">
Nov 23 21:13:10 compute-1 nova_compute[230183]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 23 21:13:10 compute-1 nova_compute[230183]:     </nova:port>
Nov 23 21:13:10 compute-1 nova_compute[230183]:   </nova:ports>
Nov 23 21:13:10 compute-1 nova_compute[230183]: </nova:instance>
Nov 23 21:13:10 compute-1 nova_compute[230183]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 23 21:13:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:13:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:11.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:13:11 compute-1 nova_compute[230183]: 2025-11-23 21:13:11.653 230187 INFO nova.network.neutron [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Port 9852de9e-899c-4a7c-8268-07fee5003eac from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 23 21:13:11 compute-1 nova_compute[230183]: 2025-11-23 21:13:11.654 230187 DEBUG nova.network.neutron [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updating instance_info_cache with network_info: [{"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:13:11 compute-1 nova_compute[230183]: 2025-11-23 21:13:11.669 230187 DEBUG oslo_concurrency.lockutils [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Releasing lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:13:11 compute-1 nova_compute[230183]: 2025-11-23 21:13:11.687 230187 DEBUG oslo_concurrency.lockutils [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "interface-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-9852de9e-899c-4a7c-8268-07fee5003eac" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 1.933s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:13:11 compute-1 ovn_controller[132845]: 2025-11-23T21:13:11Z|00090|binding|INFO|Releasing lport 882afaa1-9000-493d-808e-b1d906b6e642 from this chassis (sb_readonly=0)
Nov 23 21:13:11 compute-1 nova_compute[230183]: 2025-11-23 21:13:11.854 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:12 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:13:12 compute-1 nova_compute[230183]: 2025-11-23 21:13:12.277 230187 DEBUG nova.compute.manager [req-78cc67d0-1bfd-47af-9959-6521b64f48e6 req-b2d7ff5b-2306-4817-8d26-d05593c96ee6 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received event network-changed-bdbb1df8-a028-4685-9661-24563619eb80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:13:12 compute-1 nova_compute[230183]: 2025-11-23 21:13:12.278 230187 DEBUG nova.compute.manager [req-78cc67d0-1bfd-47af-9959-6521b64f48e6 req-b2d7ff5b-2306-4817-8d26-d05593c96ee6 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Refreshing instance network info cache due to event network-changed-bdbb1df8-a028-4685-9661-24563619eb80. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 23 21:13:12 compute-1 nova_compute[230183]: 2025-11-23 21:13:12.278 230187 DEBUG oslo_concurrency.lockutils [req-78cc67d0-1bfd-47af-9959-6521b64f48e6 req-b2d7ff5b-2306-4817-8d26-d05593c96ee6 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:13:12 compute-1 nova_compute[230183]: 2025-11-23 21:13:12.278 230187 DEBUG oslo_concurrency.lockutils [req-78cc67d0-1bfd-47af-9959-6521b64f48e6 req-b2d7ff5b-2306-4817-8d26-d05593c96ee6 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:13:12 compute-1 nova_compute[230183]: 2025-11-23 21:13:12.278 230187 DEBUG nova.network.neutron [req-78cc67d0-1bfd-47af-9959-6521b64f48e6 req-b2d7ff5b-2306-4817-8d26-d05593c96ee6 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Refreshing network info cache for port bdbb1df8-a028-4685-9661-24563619eb80 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 23 21:13:12 compute-1 nova_compute[230183]: 2025-11-23 21:13:12.317 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:12 compute-1 nova_compute[230183]: 2025-11-23 21:13:12.367 230187 DEBUG oslo_concurrency.lockutils [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:13:12 compute-1 nova_compute[230183]: 2025-11-23 21:13:12.368 230187 DEBUG oslo_concurrency.lockutils [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:13:12 compute-1 nova_compute[230183]: 2025-11-23 21:13:12.368 230187 DEBUG oslo_concurrency.lockutils [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:13:12 compute-1 nova_compute[230183]: 2025-11-23 21:13:12.368 230187 DEBUG oslo_concurrency.lockutils [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:13:12 compute-1 nova_compute[230183]: 2025-11-23 21:13:12.368 230187 DEBUG oslo_concurrency.lockutils [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:13:12 compute-1 nova_compute[230183]: 2025-11-23 21:13:12.369 230187 INFO nova.compute.manager [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Terminating instance
Nov 23 21:13:12 compute-1 nova_compute[230183]: 2025-11-23 21:13:12.370 230187 DEBUG nova.compute.manager [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 23 21:13:12 compute-1 kernel: tapbdbb1df8-a0 (unregistering): left promiscuous mode
Nov 23 21:13:12 compute-1 NetworkManager[49021]: <info>  [1763932392.4220] device (tapbdbb1df8-a0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 23 21:13:12 compute-1 nova_compute[230183]: 2025-11-23 21:13:12.428 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:12 compute-1 ovn_controller[132845]: 2025-11-23T21:13:12Z|00091|binding|INFO|Releasing lport bdbb1df8-a028-4685-9661-24563619eb80 from this chassis (sb_readonly=0)
Nov 23 21:13:12 compute-1 ovn_controller[132845]: 2025-11-23T21:13:12Z|00092|binding|INFO|Setting lport bdbb1df8-a028-4685-9661-24563619eb80 down in Southbound
Nov 23 21:13:12 compute-1 ovn_controller[132845]: 2025-11-23T21:13:12Z|00093|binding|INFO|Removing iface tapbdbb1df8-a0 ovn-installed in OVS
Nov 23 21:13:12 compute-1 nova_compute[230183]: 2025-11-23 21:13:12.429 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:12 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:12.439 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:c9:f4 10.100.0.12'], port_security=['fa:16:3e:f3:c9:f4 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4bac23b8-7bcd-4f5e-89a8-b035a16ffe36', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa502c12-d22c-490c-942b-57c2b1624866', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '30b87ecc-e7bf-46f1-a605-8bcfe0ecba45', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8207d226-2b2e-4ad5-9d7b-3777cdc61652, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=bdbb1df8-a028-4685-9661-24563619eb80) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 21:13:12 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:12.440 142158 INFO neutron.agent.ovn.metadata.agent [-] Port bdbb1df8-a028-4685-9661-24563619eb80 in datapath aa502c12-d22c-490c-942b-57c2b1624866 unbound from our chassis
Nov 23 21:13:12 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:12.441 142158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa502c12-d22c-490c-942b-57c2b1624866, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 23 21:13:12 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:12.442 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[28a61e05-1b76-4e31-bc53-b5799b96bbc7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:12 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:12.442 142158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866 namespace which is not needed anymore
Nov 23 21:13:12 compute-1 nova_compute[230183]: 2025-11-23 21:13:12.447 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:12 compute-1 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000006.scope: Deactivated successfully.
Nov 23 21:13:12 compute-1 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000006.scope: Consumed 18.410s CPU time.
Nov 23 21:13:12 compute-1 systemd-machined[193469]: Machine qemu-4-instance-00000006 terminated.
Nov 23 21:13:12 compute-1 ceph-mon[80135]: pgmap v953: 337 pgs: 337 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.8 KiB/s wr, 29 op/s
Nov 23 21:13:12 compute-1 neutron-haproxy-ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866[237700]: [NOTICE]   (237704) : haproxy version is 2.8.14-c23fe91
Nov 23 21:13:12 compute-1 neutron-haproxy-ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866[237700]: [NOTICE]   (237704) : path to executable is /usr/sbin/haproxy
Nov 23 21:13:12 compute-1 neutron-haproxy-ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866[237700]: [WARNING]  (237704) : Exiting Master process...
Nov 23 21:13:12 compute-1 neutron-haproxy-ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866[237700]: [ALERT]    (237704) : Current worker (237706) exited with code 143 (Terminated)
Nov 23 21:13:12 compute-1 neutron-haproxy-ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866[237700]: [WARNING]  (237704) : All workers exited. Exiting... (0)
Nov 23 21:13:12 compute-1 systemd[1]: libpod-ae0b27a770ef2cf43f82c68adc6365354c06936a9cd7f93f4e5ec82be240a600.scope: Deactivated successfully.
Nov 23 21:13:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:13:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:12.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:13:12 compute-1 podman[240046]: 2025-11-23 21:13:12.556299917 +0000 UTC m=+0.039925227 container died ae0b27a770ef2cf43f82c68adc6365354c06936a9cd7f93f4e5ec82be240a600 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 21:13:12 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ae0b27a770ef2cf43f82c68adc6365354c06936a9cd7f93f4e5ec82be240a600-userdata-shm.mount: Deactivated successfully.
Nov 23 21:13:12 compute-1 systemd[1]: var-lib-containers-storage-overlay-1d0c50b76b192f5ce5a4fd663eee6064b85b526a900eeee678e7ce0a629a71ae-merged.mount: Deactivated successfully.
Nov 23 21:13:12 compute-1 NetworkManager[49021]: <info>  [1763932392.5899] manager: (tapbdbb1df8-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/58)
Nov 23 21:13:12 compute-1 nova_compute[230183]: 2025-11-23 21:13:12.591 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:12 compute-1 podman[240046]: 2025-11-23 21:13:12.593626632 +0000 UTC m=+0.077251942 container cleanup ae0b27a770ef2cf43f82c68adc6365354c06936a9cd7f93f4e5ec82be240a600 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 23 21:13:12 compute-1 nova_compute[230183]: 2025-11-23 21:13:12.596 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:12 compute-1 systemd[1]: libpod-conmon-ae0b27a770ef2cf43f82c68adc6365354c06936a9cd7f93f4e5ec82be240a600.scope: Deactivated successfully.
Nov 23 21:13:12 compute-1 nova_compute[230183]: 2025-11-23 21:13:12.605 230187 INFO nova.virt.libvirt.driver [-] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Instance destroyed successfully.
Nov 23 21:13:12 compute-1 nova_compute[230183]: 2025-11-23 21:13:12.605 230187 DEBUG nova.objects.instance [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'resources' on Instance uuid 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:13:12 compute-1 nova_compute[230183]: 2025-11-23 21:13:12.627 230187 DEBUG nova.virt.libvirt.vif [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:11:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1210792474',display_name='tempest-TestNetworkBasicOps-server-1210792474',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1210792474',id=6,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC9I5o3FOJoMlLS5RVHvg4JB6VMA0TLpRAHrRWOuj73hgQ5knZWkP8wznWff+IF5v3eA9GQgz9kKnWlcz54pfIskwjEMQ8tpar2NP2dJjbFuASygJ+AuXJaTUib24SH0fw==',key_name='tempest-TestNetworkBasicOps-192906804',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:11:16Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-jk4nm00m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:11:16Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=4bac23b8-7bcd-4f5e-89a8-b035a16ffe36,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 23 21:13:12 compute-1 nova_compute[230183]: 2025-11-23 21:13:12.628 230187 DEBUG nova.network.os_vif_util [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 23 21:13:12 compute-1 nova_compute[230183]: 2025-11-23 21:13:12.629 230187 DEBUG nova.network.os_vif_util [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f3:c9:f4,bridge_name='br-int',has_traffic_filtering=True,id=bdbb1df8-a028-4685-9661-24563619eb80,network=Network(aa502c12-d22c-490c-942b-57c2b1624866),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdbb1df8-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 23 21:13:12 compute-1 nova_compute[230183]: 2025-11-23 21:13:12.629 230187 DEBUG os_vif [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:c9:f4,bridge_name='br-int',has_traffic_filtering=True,id=bdbb1df8-a028-4685-9661-24563619eb80,network=Network(aa502c12-d22c-490c-942b-57c2b1624866),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdbb1df8-a0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 23 21:13:12 compute-1 nova_compute[230183]: 2025-11-23 21:13:12.630 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:12 compute-1 nova_compute[230183]: 2025-11-23 21:13:12.631 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbdbb1df8-a0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:13:12 compute-1 nova_compute[230183]: 2025-11-23 21:13:12.632 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:12 compute-1 nova_compute[230183]: 2025-11-23 21:13:12.635 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:13:12 compute-1 nova_compute[230183]: 2025-11-23 21:13:12.637 230187 INFO os_vif [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:c9:f4,bridge_name='br-int',has_traffic_filtering=True,id=bdbb1df8-a028-4685-9661-24563619eb80,network=Network(aa502c12-d22c-490c-942b-57c2b1624866),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdbb1df8-a0')
Nov 23 21:13:12 compute-1 podman[240087]: 2025-11-23 21:13:12.656800397 +0000 UTC m=+0.037880401 container remove ae0b27a770ef2cf43f82c68adc6365354c06936a9cd7f93f4e5ec82be240a600 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 21:13:12 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:12.663 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[379bbf24-ac77-4378-86b6-7769de929ff6]: (4, ('Sun Nov 23 09:13:12 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866 (ae0b27a770ef2cf43f82c68adc6365354c06936a9cd7f93f4e5ec82be240a600)\nae0b27a770ef2cf43f82c68adc6365354c06936a9cd7f93f4e5ec82be240a600\nSun Nov 23 09:13:12 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866 (ae0b27a770ef2cf43f82c68adc6365354c06936a9cd7f93f4e5ec82be240a600)\nae0b27a770ef2cf43f82c68adc6365354c06936a9cd7f93f4e5ec82be240a600\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:12 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:12.665 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[5d56225d-22df-4120-8b37-4b556c7dc6cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:12 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:12.665 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa502c12-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:13:12 compute-1 nova_compute[230183]: 2025-11-23 21:13:12.667 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:12 compute-1 kernel: tapaa502c12-d0: left promiscuous mode
Nov 23 21:13:12 compute-1 nova_compute[230183]: 2025-11-23 21:13:12.685 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:12 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:12.688 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[230dd5e7-80ba-4212-88aa-54ab457a62e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:12 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:12.700 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[15bad5e6-45fb-4918-8ff4-c39103d409fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:12 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:12.701 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[d6ccf312-72ec-4b61-9208-b5159d253fa9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:12 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:12.723 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[723dfb59-2fbf-4c8c-a12f-8b12eccc23ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422441, 'reachable_time': 18666, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240119, 'error': None, 'target': 'ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:12 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:12.726 142272 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 23 21:13:12 compute-1 systemd[1]: run-netns-ovnmeta\x2daa502c12\x2dd22c\x2d490c\x2d942b\x2d57c2b1624866.mount: Deactivated successfully.
Nov 23 21:13:12 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:12.726 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[1a29f803-9703-438b-9300-0a7f6d1c7ea3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:12 compute-1 nova_compute[230183]: 2025-11-23 21:13:12.846 230187 DEBUG nova.compute.manager [req-2f8f9936-3ef5-4c95-94a8-748eca89f5d2 req-cd71013f-cf0b-4457-ba28-f3f77d839aa4 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received event network-vif-unplugged-bdbb1df8-a028-4685-9661-24563619eb80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:13:12 compute-1 nova_compute[230183]: 2025-11-23 21:13:12.846 230187 DEBUG oslo_concurrency.lockutils [req-2f8f9936-3ef5-4c95-94a8-748eca89f5d2 req-cd71013f-cf0b-4457-ba28-f3f77d839aa4 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:13:12 compute-1 nova_compute[230183]: 2025-11-23 21:13:12.847 230187 DEBUG oslo_concurrency.lockutils [req-2f8f9936-3ef5-4c95-94a8-748eca89f5d2 req-cd71013f-cf0b-4457-ba28-f3f77d839aa4 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:13:12 compute-1 nova_compute[230183]: 2025-11-23 21:13:12.847 230187 DEBUG oslo_concurrency.lockutils [req-2f8f9936-3ef5-4c95-94a8-748eca89f5d2 req-cd71013f-cf0b-4457-ba28-f3f77d839aa4 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:13:12 compute-1 nova_compute[230183]: 2025-11-23 21:13:12.847 230187 DEBUG nova.compute.manager [req-2f8f9936-3ef5-4c95-94a8-748eca89f5d2 req-cd71013f-cf0b-4457-ba28-f3f77d839aa4 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] No waiting events found dispatching network-vif-unplugged-bdbb1df8-a028-4685-9661-24563619eb80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 21:13:12 compute-1 nova_compute[230183]: 2025-11-23 21:13:12.847 230187 DEBUG nova.compute.manager [req-2f8f9936-3ef5-4c95-94a8-748eca89f5d2 req-cd71013f-cf0b-4457-ba28-f3f77d839aa4 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received event network-vif-unplugged-bdbb1df8-a028-4685-9661-24563619eb80 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 23 21:13:13 compute-1 nova_compute[230183]: 2025-11-23 21:13:13.081 230187 INFO nova.virt.libvirt.driver [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Deleting instance files /var/lib/nova/instances/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_del
Nov 23 21:13:13 compute-1 nova_compute[230183]: 2025-11-23 21:13:13.082 230187 INFO nova.virt.libvirt.driver [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Deletion of /var/lib/nova/instances/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_del complete
Nov 23 21:13:13 compute-1 nova_compute[230183]: 2025-11-23 21:13:13.154 230187 INFO nova.compute.manager [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Took 0.78 seconds to destroy the instance on the hypervisor.
Nov 23 21:13:13 compute-1 nova_compute[230183]: 2025-11-23 21:13:13.155 230187 DEBUG oslo.service.loopingcall [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 23 21:13:13 compute-1 nova_compute[230183]: 2025-11-23 21:13:13.155 230187 DEBUG nova.compute.manager [-] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 23 21:13:13 compute-1 nova_compute[230183]: 2025-11-23 21:13:13.155 230187 DEBUG nova.network.neutron [-] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 23 21:13:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:13:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:13.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:13:14 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:14.492 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d8ff4ac4-2bee-48db-b79e-2466bc4db046, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:13:14 compute-1 ceph-mon[80135]: pgmap v954: 337 pgs: 337 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.5 KiB/s wr, 29 op/s
Nov 23 21:13:14 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/935920564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:13:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:13:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:14.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:13:14 compute-1 nova_compute[230183]: 2025-11-23 21:13:14.606 230187 DEBUG nova.network.neutron [-] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:13:14 compute-1 nova_compute[230183]: 2025-11-23 21:13:14.630 230187 DEBUG nova.network.neutron [req-78cc67d0-1bfd-47af-9959-6521b64f48e6 req-b2d7ff5b-2306-4817-8d26-d05593c96ee6 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updated VIF entry in instance network info cache for port bdbb1df8-a028-4685-9661-24563619eb80. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 23 21:13:14 compute-1 nova_compute[230183]: 2025-11-23 21:13:14.630 230187 DEBUG nova.network.neutron [req-78cc67d0-1bfd-47af-9959-6521b64f48e6 req-b2d7ff5b-2306-4817-8d26-d05593c96ee6 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updating instance_info_cache with network_info: [{"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:13:14 compute-1 nova_compute[230183]: 2025-11-23 21:13:14.656 230187 INFO nova.compute.manager [-] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Took 1.50 seconds to deallocate network for instance.
Nov 23 21:13:14 compute-1 nova_compute[230183]: 2025-11-23 21:13:14.683 230187 DEBUG oslo_concurrency.lockutils [req-78cc67d0-1bfd-47af-9959-6521b64f48e6 req-b2d7ff5b-2306-4817-8d26-d05593c96ee6 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:13:14 compute-1 nova_compute[230183]: 2025-11-23 21:13:14.724 230187 DEBUG oslo_concurrency.lockutils [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:13:14 compute-1 nova_compute[230183]: 2025-11-23 21:13:14.725 230187 DEBUG oslo_concurrency.lockutils [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:13:14 compute-1 nova_compute[230183]: 2025-11-23 21:13:14.746 230187 DEBUG nova.compute.manager [req-aa11cde4-66ee-425e-8d26-5056ed623d73 req-fa81abc1-0598-4bc3-898c-35bd659f2ac8 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received event network-vif-deleted-bdbb1df8-a028-4685-9661-24563619eb80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:13:14 compute-1 nova_compute[230183]: 2025-11-23 21:13:14.747 230187 INFO nova.compute.manager [req-aa11cde4-66ee-425e-8d26-5056ed623d73 req-fa81abc1-0598-4bc3-898c-35bd659f2ac8 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Neutron deleted interface bdbb1df8-a028-4685-9661-24563619eb80; detaching it from the instance and deleting it from the info cache
Nov 23 21:13:14 compute-1 nova_compute[230183]: 2025-11-23 21:13:14.747 230187 DEBUG nova.network.neutron [req-aa11cde4-66ee-425e-8d26-5056ed623d73 req-fa81abc1-0598-4bc3-898c-35bd659f2ac8 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:13:14 compute-1 nova_compute[230183]: 2025-11-23 21:13:14.769 230187 DEBUG nova.compute.manager [req-aa11cde4-66ee-425e-8d26-5056ed623d73 req-fa81abc1-0598-4bc3-898c-35bd659f2ac8 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Detach interface failed, port_id=bdbb1df8-a028-4685-9661-24563619eb80, reason: Instance 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 23 21:13:14 compute-1 nova_compute[230183]: 2025-11-23 21:13:14.771 230187 DEBUG oslo_concurrency.processutils [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:13:14 compute-1 nova_compute[230183]: 2025-11-23 21:13:14.948 230187 DEBUG nova.compute.manager [req-a54300cf-3307-45c3-aba2-70eb4c9aea59 req-b828c63c-a5e2-4b69-8ec8-759ec285e08e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received event network-vif-plugged-bdbb1df8-a028-4685-9661-24563619eb80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:13:14 compute-1 nova_compute[230183]: 2025-11-23 21:13:14.949 230187 DEBUG oslo_concurrency.lockutils [req-a54300cf-3307-45c3-aba2-70eb4c9aea59 req-b828c63c-a5e2-4b69-8ec8-759ec285e08e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:13:14 compute-1 nova_compute[230183]: 2025-11-23 21:13:14.949 230187 DEBUG oslo_concurrency.lockutils [req-a54300cf-3307-45c3-aba2-70eb4c9aea59 req-b828c63c-a5e2-4b69-8ec8-759ec285e08e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:13:14 compute-1 nova_compute[230183]: 2025-11-23 21:13:14.949 230187 DEBUG oslo_concurrency.lockutils [req-a54300cf-3307-45c3-aba2-70eb4c9aea59 req-b828c63c-a5e2-4b69-8ec8-759ec285e08e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:13:14 compute-1 nova_compute[230183]: 2025-11-23 21:13:14.949 230187 DEBUG nova.compute.manager [req-a54300cf-3307-45c3-aba2-70eb4c9aea59 req-b828c63c-a5e2-4b69-8ec8-759ec285e08e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] No waiting events found dispatching network-vif-plugged-bdbb1df8-a028-4685-9661-24563619eb80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 21:13:14 compute-1 nova_compute[230183]: 2025-11-23 21:13:14.949 230187 WARNING nova.compute.manager [req-a54300cf-3307-45c3-aba2-70eb4c9aea59 req-b828c63c-a5e2-4b69-8ec8-759ec285e08e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received unexpected event network-vif-plugged-bdbb1df8-a028-4685-9661-24563619eb80 for instance with vm_state deleted and task_state None.
Nov 23 21:13:15 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:13:15 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3698391352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:13:15 compute-1 nova_compute[230183]: 2025-11-23 21:13:15.204 230187 DEBUG oslo_concurrency.processutils [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:13:15 compute-1 nova_compute[230183]: 2025-11-23 21:13:15.209 230187 DEBUG nova.compute.provider_tree [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:13:15 compute-1 nova_compute[230183]: 2025-11-23 21:13:15.224 230187 DEBUG nova.scheduler.client.report [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:13:15 compute-1 nova_compute[230183]: 2025-11-23 21:13:15.249 230187 DEBUG oslo_concurrency.lockutils [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.524s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:13:15 compute-1 nova_compute[230183]: 2025-11-23 21:13:15.277 230187 INFO nova.scheduler.client.report [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Deleted allocations for instance 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36
Nov 23 21:13:15 compute-1 nova_compute[230183]: 2025-11-23 21:13:15.369 230187 DEBUG oslo_concurrency.lockutils [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:13:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:13:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:15.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:13:15 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3698391352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:13:15 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2611156026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:13:15 compute-1 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 23 21:13:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:13:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:16.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:13:16 compute-1 ceph-mon[80135]: pgmap v955: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 6.7 KiB/s wr, 57 op/s
Nov 23 21:13:17 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:13:17 compute-1 nova_compute[230183]: 2025-11-23 21:13:17.318 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:13:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:17.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:13:17 compute-1 nova_compute[230183]: 2025-11-23 21:13:17.632 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:18 compute-1 sshd-session[240147]: Invalid user sol from 161.35.133.66 port 52098
Nov 23 21:13:18 compute-1 sshd-session[240147]: Connection closed by invalid user sol 161.35.133.66 port 52098 [preauth]
Nov 23 21:13:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:13:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:18.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:13:18 compute-1 ceph-mon[80135]: pgmap v956: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 4.3 KiB/s wr, 56 op/s
Nov 23 21:13:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:13:18 compute-1 nova_compute[230183]: 2025-11-23 21:13:18.759 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:18 compute-1 nova_compute[230183]: 2025-11-23 21:13:18.829 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:13:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:19.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:13:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:13:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:20.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:13:20 compute-1 ceph-mon[80135]: pgmap v957: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 4.3 KiB/s wr, 57 op/s
Nov 23 21:13:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:13:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:21.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:13:22 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:13:22 compute-1 nova_compute[230183]: 2025-11-23 21:13:22.320 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:13:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:22.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:13:22 compute-1 ceph-mon[80135]: pgmap v958: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.7 KiB/s wr, 48 op/s
Nov 23 21:13:22 compute-1 nova_compute[230183]: 2025-11-23 21:13:22.634 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:13:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:23.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:13:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:13:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:24.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:13:24 compute-1 ceph-mon[80135]: pgmap v959: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 23 21:13:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:13:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:25.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:13:25 compute-1 podman[240155]: 2025-11-23 21:13:25.654485065 +0000 UTC m=+0.061344237 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 23 21:13:25 compute-1 podman[240154]: 2025-11-23 21:13:25.691626197 +0000 UTC m=+0.093621299 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 21:13:25 compute-1 sudo[240192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:13:25 compute-1 sudo[240192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:13:25 compute-1 sudo[240192]: pam_unix(sudo:session): session closed for user root
Nov 23 21:13:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:13:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:26.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:13:26 compute-1 ceph-mon[80135]: pgmap v960: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Nov 23 21:13:27 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:13:27 compute-1 nova_compute[230183]: 2025-11-23 21:13:27.322 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:13:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:27.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:13:27 compute-1 nova_compute[230183]: 2025-11-23 21:13:27.603 230187 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763932392.6023915, 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 23 21:13:27 compute-1 nova_compute[230183]: 2025-11-23 21:13:27.603 230187 INFO nova.compute.manager [-] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] VM Stopped (Lifecycle Event)
Nov 23 21:13:27 compute-1 nova_compute[230183]: 2025-11-23 21:13:27.619 230187 DEBUG nova.compute.manager [None req-2669e519-7d87-4110-945c-4c498466f9bf - - - - - -] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:13:27 compute-1 nova_compute[230183]: 2025-11-23 21:13:27.675 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 21:13:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:28.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 21:13:28 compute-1 ceph-mon[80135]: pgmap v961: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:13:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:13:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:29.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:13:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:13:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:30.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:13:30 compute-1 podman[240225]: 2025-11-23 21:13:30.641766549 +0000 UTC m=+0.055315657 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 21:13:30 compute-1 ceph-mon[80135]: pgmap v962: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Nov 23 21:13:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:13:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:31.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:13:32 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:13:32 compute-1 nova_compute[230183]: 2025-11-23 21:13:32.324 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:13:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:32.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:13:32 compute-1 nova_compute[230183]: 2025-11-23 21:13:32.676 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:32 compute-1 ceph-mon[80135]: pgmap v963: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 21:13:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:13:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:33.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:13:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:13:33 compute-1 nova_compute[230183]: 2025-11-23 21:13:33.727 230187 DEBUG oslo_concurrency.lockutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "c73efbfb-509e-4eb2-af63-a65ba0f98094" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:13:33 compute-1 nova_compute[230183]: 2025-11-23 21:13:33.727 230187 DEBUG oslo_concurrency.lockutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "c73efbfb-509e-4eb2-af63-a65ba0f98094" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:13:33 compute-1 nova_compute[230183]: 2025-11-23 21:13:33.739 230187 DEBUG nova.compute.manager [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 23 21:13:33 compute-1 nova_compute[230183]: 2025-11-23 21:13:33.815 230187 DEBUG oslo_concurrency.lockutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:13:33 compute-1 nova_compute[230183]: 2025-11-23 21:13:33.816 230187 DEBUG oslo_concurrency.lockutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:13:33 compute-1 nova_compute[230183]: 2025-11-23 21:13:33.822 230187 DEBUG nova.virt.hardware [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 23 21:13:33 compute-1 nova_compute[230183]: 2025-11-23 21:13:33.822 230187 INFO nova.compute.claims [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Claim successful on node compute-1.ctlplane.example.com
Nov 23 21:13:33 compute-1 nova_compute[230183]: 2025-11-23 21:13:33.903 230187 DEBUG oslo_concurrency.processutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:13:34 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:13:34 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1372269286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:13:34 compute-1 nova_compute[230183]: 2025-11-23 21:13:34.349 230187 DEBUG oslo_concurrency.processutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:13:34 compute-1 nova_compute[230183]: 2025-11-23 21:13:34.360 230187 DEBUG nova.compute.provider_tree [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:13:34 compute-1 nova_compute[230183]: 2025-11-23 21:13:34.380 230187 DEBUG nova.scheduler.client.report [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:13:34 compute-1 nova_compute[230183]: 2025-11-23 21:13:34.405 230187 DEBUG oslo_concurrency.lockutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:13:34 compute-1 nova_compute[230183]: 2025-11-23 21:13:34.406 230187 DEBUG nova.compute.manager [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 23 21:13:34 compute-1 nova_compute[230183]: 2025-11-23 21:13:34.453 230187 DEBUG nova.compute.manager [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 23 21:13:34 compute-1 nova_compute[230183]: 2025-11-23 21:13:34.454 230187 DEBUG nova.network.neutron [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 23 21:13:34 compute-1 nova_compute[230183]: 2025-11-23 21:13:34.486 230187 INFO nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 23 21:13:34 compute-1 nova_compute[230183]: 2025-11-23 21:13:34.506 230187 DEBUG nova.compute.manager [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 23 21:13:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:13:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:34.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:13:34 compute-1 nova_compute[230183]: 2025-11-23 21:13:34.607 230187 DEBUG nova.compute.manager [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 23 21:13:34 compute-1 nova_compute[230183]: 2025-11-23 21:13:34.609 230187 DEBUG nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 23 21:13:34 compute-1 nova_compute[230183]: 2025-11-23 21:13:34.610 230187 INFO nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Creating image(s)
Nov 23 21:13:34 compute-1 nova_compute[230183]: 2025-11-23 21:13:34.642 230187 DEBUG nova.storage.rbd_utils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image c73efbfb-509e-4eb2-af63-a65ba0f98094_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:13:34 compute-1 nova_compute[230183]: 2025-11-23 21:13:34.672 230187 DEBUG nova.storage.rbd_utils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image c73efbfb-509e-4eb2-af63-a65ba0f98094_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:13:34 compute-1 nova_compute[230183]: 2025-11-23 21:13:34.703 230187 DEBUG nova.storage.rbd_utils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image c73efbfb-509e-4eb2-af63-a65ba0f98094_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:13:34 compute-1 ceph-mon[80135]: pgmap v964: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 21:13:34 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1372269286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:13:34 compute-1 nova_compute[230183]: 2025-11-23 21:13:34.707 230187 DEBUG oslo_concurrency.processutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:13:34 compute-1 nova_compute[230183]: 2025-11-23 21:13:34.725 230187 DEBUG nova.policy [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9fb5352c62684f2ba3a326a953a10dfe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '782593db60784ab8bff41fe87d72ff5f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 23 21:13:34 compute-1 nova_compute[230183]: 2025-11-23 21:13:34.761 230187 DEBUG oslo_concurrency.processutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:13:34 compute-1 nova_compute[230183]: 2025-11-23 21:13:34.761 230187 DEBUG oslo_concurrency.lockutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:13:34 compute-1 nova_compute[230183]: 2025-11-23 21:13:34.762 230187 DEBUG oslo_concurrency.lockutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:13:34 compute-1 nova_compute[230183]: 2025-11-23 21:13:34.762 230187 DEBUG oslo_concurrency.lockutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:13:34 compute-1 nova_compute[230183]: 2025-11-23 21:13:34.788 230187 DEBUG nova.storage.rbd_utils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image c73efbfb-509e-4eb2-af63-a65ba0f98094_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:13:34 compute-1 nova_compute[230183]: 2025-11-23 21:13:34.791 230187 DEBUG oslo_concurrency.processutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 c73efbfb-509e-4eb2-af63-a65ba0f98094_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:13:35 compute-1 nova_compute[230183]: 2025-11-23 21:13:35.084 230187 DEBUG oslo_concurrency.processutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 c73efbfb-509e-4eb2-af63-a65ba0f98094_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:13:35 compute-1 nova_compute[230183]: 2025-11-23 21:13:35.148 230187 DEBUG nova.storage.rbd_utils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] resizing rbd image c73efbfb-509e-4eb2-af63-a65ba0f98094_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 23 21:13:35 compute-1 nova_compute[230183]: 2025-11-23 21:13:35.257 230187 DEBUG nova.objects.instance [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'migration_context' on Instance uuid c73efbfb-509e-4eb2-af63-a65ba0f98094 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:13:35 compute-1 nova_compute[230183]: 2025-11-23 21:13:35.279 230187 DEBUG nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 23 21:13:35 compute-1 nova_compute[230183]: 2025-11-23 21:13:35.280 230187 DEBUG nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Ensure instance console log exists: /var/lib/nova/instances/c73efbfb-509e-4eb2-af63-a65ba0f98094/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 23 21:13:35 compute-1 nova_compute[230183]: 2025-11-23 21:13:35.280 230187 DEBUG oslo_concurrency.lockutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:13:35 compute-1 nova_compute[230183]: 2025-11-23 21:13:35.281 230187 DEBUG oslo_concurrency.lockutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:13:35 compute-1 nova_compute[230183]: 2025-11-23 21:13:35.281 230187 DEBUG oslo_concurrency.lockutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:13:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:13:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:35.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:13:36 compute-1 nova_compute[230183]: 2025-11-23 21:13:36.089 230187 DEBUG nova.network.neutron [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Successfully updated port: ba818b19-9f72-4242-b9d9-b1630b5d1f24 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 23 21:13:36 compute-1 nova_compute[230183]: 2025-11-23 21:13:36.101 230187 DEBUG oslo_concurrency.lockutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "refresh_cache-c73efbfb-509e-4eb2-af63-a65ba0f98094" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:13:36 compute-1 nova_compute[230183]: 2025-11-23 21:13:36.101 230187 DEBUG oslo_concurrency.lockutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquired lock "refresh_cache-c73efbfb-509e-4eb2-af63-a65ba0f98094" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:13:36 compute-1 nova_compute[230183]: 2025-11-23 21:13:36.101 230187 DEBUG nova.network.neutron [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 23 21:13:36 compute-1 nova_compute[230183]: 2025-11-23 21:13:36.170 230187 DEBUG nova.compute.manager [req-f5a0c52a-c69c-4e3a-a78e-133263309b78 req-69cf13fb-61d4-4a1e-991b-15f3ef31b11d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Received event network-changed-ba818b19-9f72-4242-b9d9-b1630b5d1f24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:13:36 compute-1 nova_compute[230183]: 2025-11-23 21:13:36.171 230187 DEBUG nova.compute.manager [req-f5a0c52a-c69c-4e3a-a78e-133263309b78 req-69cf13fb-61d4-4a1e-991b-15f3ef31b11d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Refreshing instance network info cache due to event network-changed-ba818b19-9f72-4242-b9d9-b1630b5d1f24. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 23 21:13:36 compute-1 nova_compute[230183]: 2025-11-23 21:13:36.171 230187 DEBUG oslo_concurrency.lockutils [req-f5a0c52a-c69c-4e3a-a78e-133263309b78 req-69cf13fb-61d4-4a1e-991b-15f3ef31b11d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-c73efbfb-509e-4eb2-af63-a65ba0f98094" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:13:36 compute-1 nova_compute[230183]: 2025-11-23 21:13:36.242 230187 DEBUG nova.network.neutron [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 23 21:13:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:13:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:36.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:13:36 compute-1 ceph-mon[80135]: pgmap v965: 337 pgs: 337 active+clean; 43 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 92 KiB/s wr, 2 op/s
Nov 23 21:13:36 compute-1 nova_compute[230183]: 2025-11-23 21:13:36.810 230187 DEBUG nova.network.neutron [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Updating instance_info_cache with network_info: [{"id": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "address": "fa:16:3e:0d:e6:fe", "network": {"id": "fd64d126-bc30-4f96-8737-9a4b1cf2fe8a", "bridge": "br-int", "label": "tempest-network-smoke--1300883220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba818b19-9f", "ovs_interfaceid": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:13:36 compute-1 nova_compute[230183]: 2025-11-23 21:13:36.834 230187 DEBUG oslo_concurrency.lockutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Releasing lock "refresh_cache-c73efbfb-509e-4eb2-af63-a65ba0f98094" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:13:36 compute-1 nova_compute[230183]: 2025-11-23 21:13:36.835 230187 DEBUG nova.compute.manager [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Instance network_info: |[{"id": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "address": "fa:16:3e:0d:e6:fe", "network": {"id": "fd64d126-bc30-4f96-8737-9a4b1cf2fe8a", "bridge": "br-int", "label": "tempest-network-smoke--1300883220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba818b19-9f", "ovs_interfaceid": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 23 21:13:36 compute-1 nova_compute[230183]: 2025-11-23 21:13:36.835 230187 DEBUG oslo_concurrency.lockutils [req-f5a0c52a-c69c-4e3a-a78e-133263309b78 req-69cf13fb-61d4-4a1e-991b-15f3ef31b11d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-c73efbfb-509e-4eb2-af63-a65ba0f98094" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:13:36 compute-1 nova_compute[230183]: 2025-11-23 21:13:36.835 230187 DEBUG nova.network.neutron [req-f5a0c52a-c69c-4e3a-a78e-133263309b78 req-69cf13fb-61d4-4a1e-991b-15f3ef31b11d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Refreshing network info cache for port ba818b19-9f72-4242-b9d9-b1630b5d1f24 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 23 21:13:36 compute-1 nova_compute[230183]: 2025-11-23 21:13:36.839 230187 DEBUG nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Start _get_guest_xml network_info=[{"id": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "address": "fa:16:3e:0d:e6:fe", "network": {"id": "fd64d126-bc30-4f96-8737-9a4b1cf2fe8a", "bridge": "br-int", "label": "tempest-network-smoke--1300883220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba818b19-9f", "ovs_interfaceid": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T21:05:43Z,direct_url=<?>,disk_format='qcow2',id=3c45fa6c-8a99-4359-a34e-d89f4e1e77d0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3f8fb5175f85402ba20cf9c6989d47cf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-23T21:05:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_options': None, 'encryption_format': None, 'guest_format': None, 'encrypted': False, 'size': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'image_id': '3c45fa6c-8a99-4359-a34e-d89f4e1e77d0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 23 21:13:36 compute-1 nova_compute[230183]: 2025-11-23 21:13:36.844 230187 WARNING nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 21:13:36 compute-1 nova_compute[230183]: 2025-11-23 21:13:36.852 230187 DEBUG nova.virt.libvirt.host [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 23 21:13:36 compute-1 nova_compute[230183]: 2025-11-23 21:13:36.853 230187 DEBUG nova.virt.libvirt.host [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 23 21:13:36 compute-1 nova_compute[230183]: 2025-11-23 21:13:36.857 230187 DEBUG nova.virt.libvirt.host [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 23 21:13:36 compute-1 nova_compute[230183]: 2025-11-23 21:13:36.857 230187 DEBUG nova.virt.libvirt.host [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 23 21:13:36 compute-1 nova_compute[230183]: 2025-11-23 21:13:36.858 230187 DEBUG nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 23 21:13:36 compute-1 nova_compute[230183]: 2025-11-23 21:13:36.858 230187 DEBUG nova.virt.hardware [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-23T21:05:42Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='56044b93-2979-48aa-b67f-c37e1b489306',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T21:05:43Z,direct_url=<?>,disk_format='qcow2',id=3c45fa6c-8a99-4359-a34e-d89f4e1e77d0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3f8fb5175f85402ba20cf9c6989d47cf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-23T21:05:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 23 21:13:36 compute-1 nova_compute[230183]: 2025-11-23 21:13:36.859 230187 DEBUG nova.virt.hardware [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 23 21:13:36 compute-1 nova_compute[230183]: 2025-11-23 21:13:36.859 230187 DEBUG nova.virt.hardware [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 23 21:13:36 compute-1 nova_compute[230183]: 2025-11-23 21:13:36.859 230187 DEBUG nova.virt.hardware [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 23 21:13:36 compute-1 nova_compute[230183]: 2025-11-23 21:13:36.860 230187 DEBUG nova.virt.hardware [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 23 21:13:36 compute-1 nova_compute[230183]: 2025-11-23 21:13:36.860 230187 DEBUG nova.virt.hardware [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 23 21:13:36 compute-1 nova_compute[230183]: 2025-11-23 21:13:36.860 230187 DEBUG nova.virt.hardware [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 23 21:13:36 compute-1 nova_compute[230183]: 2025-11-23 21:13:36.860 230187 DEBUG nova.virt.hardware [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 23 21:13:36 compute-1 nova_compute[230183]: 2025-11-23 21:13:36.861 230187 DEBUG nova.virt.hardware [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 23 21:13:36 compute-1 nova_compute[230183]: 2025-11-23 21:13:36.861 230187 DEBUG nova.virt.hardware [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 23 21:13:36 compute-1 nova_compute[230183]: 2025-11-23 21:13:36.862 230187 DEBUG nova.virt.hardware [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 23 21:13:36 compute-1 nova_compute[230183]: 2025-11-23 21:13:36.865 230187 DEBUG oslo_concurrency.processutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:13:37 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:13:37 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 21:13:37 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2402362312' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:13:37 compute-1 nova_compute[230183]: 2025-11-23 21:13:37.326 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:37 compute-1 nova_compute[230183]: 2025-11-23 21:13:37.336 230187 DEBUG oslo_concurrency.processutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:13:37 compute-1 nova_compute[230183]: 2025-11-23 21:13:37.357 230187 DEBUG nova.storage.rbd_utils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image c73efbfb-509e-4eb2-af63-a65ba0f98094_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:13:37 compute-1 nova_compute[230183]: 2025-11-23 21:13:37.360 230187 DEBUG oslo_concurrency.processutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:13:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:13:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:37.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:13:37 compute-1 nova_compute[230183]: 2025-11-23 21:13:37.677 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:37 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2402362312' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:13:37 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 21:13:37 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/980794617' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:13:37 compute-1 nova_compute[230183]: 2025-11-23 21:13:37.783 230187 DEBUG oslo_concurrency.processutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:13:37 compute-1 nova_compute[230183]: 2025-11-23 21:13:37.785 230187 DEBUG nova.virt.libvirt.vif [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-23T21:13:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1142109245',display_name='tempest-TestNetworkBasicOps-server-1142109245',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1142109245',id=8,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA/OTvB3SF9HLz+tQB9k6+NtWY4GDi+dCLNTP2C1LVBWBWcF8hE2KwmFS1DV+sHHE6UrvKxVths55wvKBDKkRLk/bT3g1pE3soqIrQx5GQa2qNLkE7pPi6maRhw2rsAshw==',key_name='tempest-TestNetworkBasicOps-101179999',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-mndkc2jx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T21:13:34Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=c73efbfb-509e-4eb2-af63-a65ba0f98094,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "address": "fa:16:3e:0d:e6:fe", "network": {"id": "fd64d126-bc30-4f96-8737-9a4b1cf2fe8a", "bridge": "br-int", "label": "tempest-network-smoke--1300883220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba818b19-9f", "ovs_interfaceid": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 23 21:13:37 compute-1 nova_compute[230183]: 2025-11-23 21:13:37.785 230187 DEBUG nova.network.os_vif_util [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "address": "fa:16:3e:0d:e6:fe", "network": {"id": "fd64d126-bc30-4f96-8737-9a4b1cf2fe8a", "bridge": "br-int", "label": "tempest-network-smoke--1300883220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba818b19-9f", "ovs_interfaceid": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 23 21:13:37 compute-1 nova_compute[230183]: 2025-11-23 21:13:37.786 230187 DEBUG nova.network.os_vif_util [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:e6:fe,bridge_name='br-int',has_traffic_filtering=True,id=ba818b19-9f72-4242-b9d9-b1630b5d1f24,network=Network(fd64d126-bc30-4f96-8737-9a4b1cf2fe8a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba818b19-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 23 21:13:37 compute-1 nova_compute[230183]: 2025-11-23 21:13:37.788 230187 DEBUG nova.objects.instance [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'pci_devices' on Instance uuid c73efbfb-509e-4eb2-af63-a65ba0f98094 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:13:37 compute-1 nova_compute[230183]: 2025-11-23 21:13:37.800 230187 DEBUG nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] End _get_guest_xml xml=<domain type="kvm">
Nov 23 21:13:37 compute-1 nova_compute[230183]:   <uuid>c73efbfb-509e-4eb2-af63-a65ba0f98094</uuid>
Nov 23 21:13:37 compute-1 nova_compute[230183]:   <name>instance-00000008</name>
Nov 23 21:13:37 compute-1 nova_compute[230183]:   <memory>131072</memory>
Nov 23 21:13:37 compute-1 nova_compute[230183]:   <vcpu>1</vcpu>
Nov 23 21:13:37 compute-1 nova_compute[230183]:   <metadata>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 21:13:37 compute-1 nova_compute[230183]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:       <nova:name>tempest-TestNetworkBasicOps-server-1142109245</nova:name>
Nov 23 21:13:37 compute-1 nova_compute[230183]:       <nova:creationTime>2025-11-23 21:13:36</nova:creationTime>
Nov 23 21:13:37 compute-1 nova_compute[230183]:       <nova:flavor name="m1.nano">
Nov 23 21:13:37 compute-1 nova_compute[230183]:         <nova:memory>128</nova:memory>
Nov 23 21:13:37 compute-1 nova_compute[230183]:         <nova:disk>1</nova:disk>
Nov 23 21:13:37 compute-1 nova_compute[230183]:         <nova:swap>0</nova:swap>
Nov 23 21:13:37 compute-1 nova_compute[230183]:         <nova:ephemeral>0</nova:ephemeral>
Nov 23 21:13:37 compute-1 nova_compute[230183]:         <nova:vcpus>1</nova:vcpus>
Nov 23 21:13:37 compute-1 nova_compute[230183]:       </nova:flavor>
Nov 23 21:13:37 compute-1 nova_compute[230183]:       <nova:owner>
Nov 23 21:13:37 compute-1 nova_compute[230183]:         <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 21:13:37 compute-1 nova_compute[230183]:         <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 21:13:37 compute-1 nova_compute[230183]:       </nova:owner>
Nov 23 21:13:37 compute-1 nova_compute[230183]:       <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:       <nova:ports>
Nov 23 21:13:37 compute-1 nova_compute[230183]:         <nova:port uuid="ba818b19-9f72-4242-b9d9-b1630b5d1f24">
Nov 23 21:13:37 compute-1 nova_compute[230183]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:         </nova:port>
Nov 23 21:13:37 compute-1 nova_compute[230183]:       </nova:ports>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     </nova:instance>
Nov 23 21:13:37 compute-1 nova_compute[230183]:   </metadata>
Nov 23 21:13:37 compute-1 nova_compute[230183]:   <sysinfo type="smbios">
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <system>
Nov 23 21:13:37 compute-1 nova_compute[230183]:       <entry name="manufacturer">RDO</entry>
Nov 23 21:13:37 compute-1 nova_compute[230183]:       <entry name="product">OpenStack Compute</entry>
Nov 23 21:13:37 compute-1 nova_compute[230183]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 23 21:13:37 compute-1 nova_compute[230183]:       <entry name="serial">c73efbfb-509e-4eb2-af63-a65ba0f98094</entry>
Nov 23 21:13:37 compute-1 nova_compute[230183]:       <entry name="uuid">c73efbfb-509e-4eb2-af63-a65ba0f98094</entry>
Nov 23 21:13:37 compute-1 nova_compute[230183]:       <entry name="family">Virtual Machine</entry>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     </system>
Nov 23 21:13:37 compute-1 nova_compute[230183]:   </sysinfo>
Nov 23 21:13:37 compute-1 nova_compute[230183]:   <os>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <boot dev="hd"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <smbios mode="sysinfo"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:   </os>
Nov 23 21:13:37 compute-1 nova_compute[230183]:   <features>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <acpi/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <apic/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <vmcoreinfo/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:   </features>
Nov 23 21:13:37 compute-1 nova_compute[230183]:   <clock offset="utc">
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <timer name="pit" tickpolicy="delay"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <timer name="hpet" present="no"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:   </clock>
Nov 23 21:13:37 compute-1 nova_compute[230183]:   <cpu mode="host-model" match="exact">
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <topology sockets="1" cores="1" threads="1"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:   </cpu>
Nov 23 21:13:37 compute-1 nova_compute[230183]:   <devices>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <disk type="network" device="disk">
Nov 23 21:13:37 compute-1 nova_compute[230183]:       <driver type="raw" cache="none"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:       <source protocol="rbd" name="vms/c73efbfb-509e-4eb2-af63-a65ba0f98094_disk">
Nov 23 21:13:37 compute-1 nova_compute[230183]:         <host name="192.168.122.100" port="6789"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:         <host name="192.168.122.102" port="6789"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:         <host name="192.168.122.101" port="6789"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:       </source>
Nov 23 21:13:37 compute-1 nova_compute[230183]:       <auth username="openstack">
Nov 23 21:13:37 compute-1 nova_compute[230183]:         <secret type="ceph" uuid="03808be8-ae4a-5548-82e6-4a294f1bc627"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:       </auth>
Nov 23 21:13:37 compute-1 nova_compute[230183]:       <target dev="vda" bus="virtio"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     </disk>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <disk type="network" device="cdrom">
Nov 23 21:13:37 compute-1 nova_compute[230183]:       <driver type="raw" cache="none"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:       <source protocol="rbd" name="vms/c73efbfb-509e-4eb2-af63-a65ba0f98094_disk.config">
Nov 23 21:13:37 compute-1 nova_compute[230183]:         <host name="192.168.122.100" port="6789"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:         <host name="192.168.122.102" port="6789"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:         <host name="192.168.122.101" port="6789"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:       </source>
Nov 23 21:13:37 compute-1 nova_compute[230183]:       <auth username="openstack">
Nov 23 21:13:37 compute-1 nova_compute[230183]:         <secret type="ceph" uuid="03808be8-ae4a-5548-82e6-4a294f1bc627"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:       </auth>
Nov 23 21:13:37 compute-1 nova_compute[230183]:       <target dev="sda" bus="sata"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     </disk>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <interface type="ethernet">
Nov 23 21:13:37 compute-1 nova_compute[230183]:       <mac address="fa:16:3e:0d:e6:fe"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:       <model type="virtio"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:       <driver name="vhost" rx_queue_size="512"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:       <mtu size="1442"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:       <target dev="tapba818b19-9f"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     </interface>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <serial type="pty">
Nov 23 21:13:37 compute-1 nova_compute[230183]:       <log file="/var/lib/nova/instances/c73efbfb-509e-4eb2-af63-a65ba0f98094/console.log" append="off"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     </serial>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <video>
Nov 23 21:13:37 compute-1 nova_compute[230183]:       <model type="virtio"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     </video>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <input type="tablet" bus="usb"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <rng model="virtio">
Nov 23 21:13:37 compute-1 nova_compute[230183]:       <backend model="random">/dev/urandom</backend>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     </rng>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <controller type="usb" index="0"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     <memballoon model="virtio">
Nov 23 21:13:37 compute-1 nova_compute[230183]:       <stats period="10"/>
Nov 23 21:13:37 compute-1 nova_compute[230183]:     </memballoon>
Nov 23 21:13:37 compute-1 nova_compute[230183]:   </devices>
Nov 23 21:13:37 compute-1 nova_compute[230183]: </domain>
Nov 23 21:13:37 compute-1 nova_compute[230183]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 23 21:13:37 compute-1 nova_compute[230183]: 2025-11-23 21:13:37.803 230187 DEBUG nova.compute.manager [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Preparing to wait for external event network-vif-plugged-ba818b19-9f72-4242-b9d9-b1630b5d1f24 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 23 21:13:37 compute-1 nova_compute[230183]: 2025-11-23 21:13:37.803 230187 DEBUG oslo_concurrency.lockutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "c73efbfb-509e-4eb2-af63-a65ba0f98094-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:13:37 compute-1 nova_compute[230183]: 2025-11-23 21:13:37.803 230187 DEBUG oslo_concurrency.lockutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "c73efbfb-509e-4eb2-af63-a65ba0f98094-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:13:37 compute-1 nova_compute[230183]: 2025-11-23 21:13:37.804 230187 DEBUG oslo_concurrency.lockutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "c73efbfb-509e-4eb2-af63-a65ba0f98094-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:13:37 compute-1 nova_compute[230183]: 2025-11-23 21:13:37.804 230187 DEBUG nova.virt.libvirt.vif [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-23T21:13:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1142109245',display_name='tempest-TestNetworkBasicOps-server-1142109245',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1142109245',id=8,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA/OTvB3SF9HLz+tQB9k6+NtWY4GDi+dCLNTP2C1LVBWBWcF8hE2KwmFS1DV+sHHE6UrvKxVths55wvKBDKkRLk/bT3g1pE3soqIrQx5GQa2qNLkE7pPi6maRhw2rsAshw==',key_name='tempest-TestNetworkBasicOps-101179999',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-mndkc2jx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T21:13:34Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=c73efbfb-509e-4eb2-af63-a65ba0f98094,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "address": "fa:16:3e:0d:e6:fe", "network": {"id": "fd64d126-bc30-4f96-8737-9a4b1cf2fe8a", "bridge": "br-int", "label": "tempest-network-smoke--1300883220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba818b19-9f", "ovs_interfaceid": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 23 21:13:37 compute-1 nova_compute[230183]: 2025-11-23 21:13:37.805 230187 DEBUG nova.network.os_vif_util [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "address": "fa:16:3e:0d:e6:fe", "network": {"id": "fd64d126-bc30-4f96-8737-9a4b1cf2fe8a", "bridge": "br-int", "label": "tempest-network-smoke--1300883220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba818b19-9f", "ovs_interfaceid": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 23 21:13:37 compute-1 nova_compute[230183]: 2025-11-23 21:13:37.805 230187 DEBUG nova.network.os_vif_util [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:e6:fe,bridge_name='br-int',has_traffic_filtering=True,id=ba818b19-9f72-4242-b9d9-b1630b5d1f24,network=Network(fd64d126-bc30-4f96-8737-9a4b1cf2fe8a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba818b19-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 23 21:13:37 compute-1 nova_compute[230183]: 2025-11-23 21:13:37.806 230187 DEBUG os_vif [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:e6:fe,bridge_name='br-int',has_traffic_filtering=True,id=ba818b19-9f72-4242-b9d9-b1630b5d1f24,network=Network(fd64d126-bc30-4f96-8737-9a4b1cf2fe8a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba818b19-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 23 21:13:37 compute-1 nova_compute[230183]: 2025-11-23 21:13:37.807 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:37 compute-1 nova_compute[230183]: 2025-11-23 21:13:37.807 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:13:37 compute-1 nova_compute[230183]: 2025-11-23 21:13:37.808 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 23 21:13:37 compute-1 nova_compute[230183]: 2025-11-23 21:13:37.811 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:37 compute-1 nova_compute[230183]: 2025-11-23 21:13:37.811 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapba818b19-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:13:37 compute-1 nova_compute[230183]: 2025-11-23 21:13:37.812 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapba818b19-9f, col_values=(('external_ids', {'iface-id': 'ba818b19-9f72-4242-b9d9-b1630b5d1f24', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0d:e6:fe', 'vm-uuid': 'c73efbfb-509e-4eb2-af63-a65ba0f98094'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:13:37 compute-1 nova_compute[230183]: 2025-11-23 21:13:37.814 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:37 compute-1 NetworkManager[49021]: <info>  [1763932417.8150] manager: (tapba818b19-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Nov 23 21:13:37 compute-1 nova_compute[230183]: 2025-11-23 21:13:37.815 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:13:37 compute-1 nova_compute[230183]: 2025-11-23 21:13:37.822 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:37 compute-1 nova_compute[230183]: 2025-11-23 21:13:37.823 230187 INFO os_vif [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:e6:fe,bridge_name='br-int',has_traffic_filtering=True,id=ba818b19-9f72-4242-b9d9-b1630b5d1f24,network=Network(fd64d126-bc30-4f96-8737-9a4b1cf2fe8a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba818b19-9f')
Nov 23 21:13:37 compute-1 nova_compute[230183]: 2025-11-23 21:13:37.884 230187 DEBUG nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 23 21:13:37 compute-1 nova_compute[230183]: 2025-11-23 21:13:37.885 230187 DEBUG nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 23 21:13:37 compute-1 nova_compute[230183]: 2025-11-23 21:13:37.885 230187 DEBUG nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No VIF found with MAC fa:16:3e:0d:e6:fe, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 23 21:13:37 compute-1 nova_compute[230183]: 2025-11-23 21:13:37.886 230187 INFO nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Using config drive
Nov 23 21:13:37 compute-1 nova_compute[230183]: 2025-11-23 21:13:37.916 230187 DEBUG nova.storage.rbd_utils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image c73efbfb-509e-4eb2-af63-a65ba0f98094_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:13:38 compute-1 nova_compute[230183]: 2025-11-23 21:13:38.267 230187 DEBUG nova.network.neutron [req-f5a0c52a-c69c-4e3a-a78e-133263309b78 req-69cf13fb-61d4-4a1e-991b-15f3ef31b11d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Updated VIF entry in instance network info cache for port ba818b19-9f72-4242-b9d9-b1630b5d1f24. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 23 21:13:38 compute-1 nova_compute[230183]: 2025-11-23 21:13:38.268 230187 DEBUG nova.network.neutron [req-f5a0c52a-c69c-4e3a-a78e-133263309b78 req-69cf13fb-61d4-4a1e-991b-15f3ef31b11d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Updating instance_info_cache with network_info: [{"id": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "address": "fa:16:3e:0d:e6:fe", "network": {"id": "fd64d126-bc30-4f96-8737-9a4b1cf2fe8a", "bridge": "br-int", "label": "tempest-network-smoke--1300883220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba818b19-9f", "ovs_interfaceid": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:13:38 compute-1 nova_compute[230183]: 2025-11-23 21:13:38.283 230187 DEBUG oslo_concurrency.lockutils [req-f5a0c52a-c69c-4e3a-a78e-133263309b78 req-69cf13fb-61d4-4a1e-991b-15f3ef31b11d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-c73efbfb-509e-4eb2-af63-a65ba0f98094" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:13:38 compute-1 nova_compute[230183]: 2025-11-23 21:13:38.392 230187 INFO nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Creating config drive at /var/lib/nova/instances/c73efbfb-509e-4eb2-af63-a65ba0f98094/disk.config
Nov 23 21:13:38 compute-1 nova_compute[230183]: 2025-11-23 21:13:38.397 230187 DEBUG oslo_concurrency.processutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c73efbfb-509e-4eb2-af63-a65ba0f98094/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgpnhd6l3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:13:38 compute-1 nova_compute[230183]: 2025-11-23 21:13:38.522 230187 DEBUG oslo_concurrency.processutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c73efbfb-509e-4eb2-af63-a65ba0f98094/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgpnhd6l3" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:13:38 compute-1 nova_compute[230183]: 2025-11-23 21:13:38.560 230187 DEBUG nova.storage.rbd_utils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image c73efbfb-509e-4eb2-af63-a65ba0f98094_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:13:38 compute-1 nova_compute[230183]: 2025-11-23 21:13:38.564 230187 DEBUG oslo_concurrency.processutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c73efbfb-509e-4eb2-af63-a65ba0f98094/disk.config c73efbfb-509e-4eb2-af63-a65ba0f98094_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:13:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:13:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:38.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:13:39 compute-1 ceph-mon[80135]: pgmap v966: 337 pgs: 337 active+clean; 43 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 92 KiB/s wr, 1 op/s
Nov 23 21:13:39 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/980794617' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:13:39 compute-1 nova_compute[230183]: 2025-11-23 21:13:39.119 230187 DEBUG oslo_concurrency.processutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c73efbfb-509e-4eb2-af63-a65ba0f98094/disk.config c73efbfb-509e-4eb2-af63-a65ba0f98094_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:13:39 compute-1 nova_compute[230183]: 2025-11-23 21:13:39.121 230187 INFO nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Deleting local config drive /var/lib/nova/instances/c73efbfb-509e-4eb2-af63-a65ba0f98094/disk.config because it was imported into RBD.
Nov 23 21:13:39 compute-1 systemd[1]: Starting libvirt secret daemon...
Nov 23 21:13:39 compute-1 systemd[1]: Started libvirt secret daemon.
Nov 23 21:13:39 compute-1 kernel: tapba818b19-9f: entered promiscuous mode
Nov 23 21:13:39 compute-1 nova_compute[230183]: 2025-11-23 21:13:39.231 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:39 compute-1 ovn_controller[132845]: 2025-11-23T21:13:39Z|00094|binding|INFO|Claiming lport ba818b19-9f72-4242-b9d9-b1630b5d1f24 for this chassis.
Nov 23 21:13:39 compute-1 ovn_controller[132845]: 2025-11-23T21:13:39Z|00095|binding|INFO|ba818b19-9f72-4242-b9d9-b1630b5d1f24: Claiming fa:16:3e:0d:e6:fe 10.100.0.12
Nov 23 21:13:39 compute-1 NetworkManager[49021]: <info>  [1763932419.2333] manager: (tapba818b19-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/60)
Nov 23 21:13:39 compute-1 nova_compute[230183]: 2025-11-23 21:13:39.237 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:39 compute-1 nova_compute[230183]: 2025-11-23 21:13:39.240 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:39 compute-1 nova_compute[230183]: 2025-11-23 21:13:39.246 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:39 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:39.258 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:e6:fe 10.100.0.12'], port_security=['fa:16:3e:0d:e6:fe 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1655123038', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'c73efbfb-509e-4eb2-af63-a65ba0f98094', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1655123038', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cfd1f7f1-25d4-42fe-ac59-ece898bff9bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1bc3d174-1770-40d5-b0cb-7f310bc5e484, chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=ba818b19-9f72-4242-b9d9-b1630b5d1f24) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 21:13:39 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:39.259 142158 INFO neutron.agent.ovn.metadata.agent [-] Port ba818b19-9f72-4242-b9d9-b1630b5d1f24 in datapath fd64d126-bc30-4f96-8737-9a4b1cf2fe8a bound to our chassis
Nov 23 21:13:39 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:39.260 142158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fd64d126-bc30-4f96-8737-9a4b1cf2fe8a
Nov 23 21:13:39 compute-1 systemd-machined[193469]: New machine qemu-5-instance-00000008.
Nov 23 21:13:39 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:39.277 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[0e051fbe-e9dc-4bd5-8573-04a10d75a747]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:39 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:39.278 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfd64d126-b1 in ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 23 21:13:39 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:39.280 233901 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfd64d126-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 23 21:13:39 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:39.280 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[380f25ae-9888-45cd-b83d-8df1b27d6183]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:39 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:39.281 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[1273177f-1a93-498b-b9ae-ff1673cb6221]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:39 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:39.297 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[0144d959-b2cb-40e6-ba43-74e0e93e2237]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:39 compute-1 ovn_controller[132845]: 2025-11-23T21:13:39Z|00096|binding|INFO|Setting lport ba818b19-9f72-4242-b9d9-b1630b5d1f24 ovn-installed in OVS
Nov 23 21:13:39 compute-1 ovn_controller[132845]: 2025-11-23T21:13:39Z|00097|binding|INFO|Setting lport ba818b19-9f72-4242-b9d9-b1630b5d1f24 up in Southbound
Nov 23 21:13:39 compute-1 nova_compute[230183]: 2025-11-23 21:13:39.311 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:39 compute-1 systemd[1]: Started Virtual Machine qemu-5-instance-00000008.
Nov 23 21:13:39 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:39.324 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[531c1a0b-f1d6-479a-8f42-3b9e5d5238a4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:39 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:39.352 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[abadc56e-bd4a-41e5-9cfc-3364bdbe713e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:39 compute-1 NetworkManager[49021]: <info>  [1763932419.3609] manager: (tapfd64d126-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/61)
Nov 23 21:13:39 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:39.358 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[2de9d64e-d55d-4097-b6df-eeee03a6a59a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:39 compute-1 systemd-udevd[240600]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 21:13:39 compute-1 systemd-udevd[240598]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 21:13:39 compute-1 NetworkManager[49021]: <info>  [1763932419.3807] device (tapba818b19-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 23 21:13:39 compute-1 NetworkManager[49021]: <info>  [1763932419.3835] device (tapba818b19-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 23 21:13:39 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:39.395 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[6135d58b-fd8d-4f6b-b8ed-c1081d4d6422]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:39 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:39.398 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[7723041d-7d86-45b6-96d4-af5f0927ba6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:39 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:39.434 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[df798297-f675-40e2-a747-26c2dfb4209a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:39 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:39.457 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[242806cb-89cb-4331-9b63-4254eb2d8c86]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfd64d126-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:c5:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436800, 'reachable_time': 35556, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240625, 'error': None, 'target': 'ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:39 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:39.474 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[a125dcde-24cf-477c-84bb-b718fd2b7c8b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe80:c5fb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 436800, 'tstamp': 436800}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240626, 'error': None, 'target': 'ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:39 compute-1 nova_compute[230183]: 2025-11-23 21:13:39.475 230187 DEBUG nova.compute.manager [req-5d5cce07-7127-4145-a7a9-7f0813cf62ce req-248afa77-1aea-4d95-846d-e4e4843dfd15 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Received event network-vif-plugged-ba818b19-9f72-4242-b9d9-b1630b5d1f24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:13:39 compute-1 nova_compute[230183]: 2025-11-23 21:13:39.476 230187 DEBUG oslo_concurrency.lockutils [req-5d5cce07-7127-4145-a7a9-7f0813cf62ce req-248afa77-1aea-4d95-846d-e4e4843dfd15 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "c73efbfb-509e-4eb2-af63-a65ba0f98094-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:13:39 compute-1 nova_compute[230183]: 2025-11-23 21:13:39.477 230187 DEBUG oslo_concurrency.lockutils [req-5d5cce07-7127-4145-a7a9-7f0813cf62ce req-248afa77-1aea-4d95-846d-e4e4843dfd15 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "c73efbfb-509e-4eb2-af63-a65ba0f98094-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:13:39 compute-1 nova_compute[230183]: 2025-11-23 21:13:39.478 230187 DEBUG oslo_concurrency.lockutils [req-5d5cce07-7127-4145-a7a9-7f0813cf62ce req-248afa77-1aea-4d95-846d-e4e4843dfd15 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "c73efbfb-509e-4eb2-af63-a65ba0f98094-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:13:39 compute-1 nova_compute[230183]: 2025-11-23 21:13:39.478 230187 DEBUG nova.compute.manager [req-5d5cce07-7127-4145-a7a9-7f0813cf62ce req-248afa77-1aea-4d95-846d-e4e4843dfd15 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Processing event network-vif-plugged-ba818b19-9f72-4242-b9d9-b1630b5d1f24 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 23 21:13:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:13:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:39.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:13:39 compute-1 NetworkManager[49021]: <info>  [1763932419.9641] device (tapfd64d126-b0): carrier: link connected
Nov 23 21:13:39 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:39.979 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[376e4596-ddc4-427c-a5c0-b83fde40941c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfd64d126-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:c5:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436800, 'reachable_time': 35556, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 240628, 'error': None, 'target': 'ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:40.020 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[fa4b53aa-2b9a-4228-89cc-66687a4a8460]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:40 compute-1 ceph-mon[80135]: pgmap v967: 337 pgs: 337 active+clean; 51 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 3.9 KiB/s rd, 372 KiB/s wr, 5 op/s
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:40.076 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[4db8a726-fdb7-4cf9-aa5e-5d95b6053bc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:40.077 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd64d126-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:40.077 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:40.078 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfd64d126-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:13:40 compute-1 nova_compute[230183]: 2025-11-23 21:13:40.079 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:40 compute-1 kernel: tapfd64d126-b0: entered promiscuous mode
Nov 23 21:13:40 compute-1 nova_compute[230183]: 2025-11-23 21:13:40.082 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:40 compute-1 NetworkManager[49021]: <info>  [1763932420.0838] manager: (tapfd64d126-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:40.083 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfd64d126-b0, col_values=(('external_ids', {'iface-id': '6ab19126-935d-4e09-a163-fbca05fb1c6f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:13:40 compute-1 nova_compute[230183]: 2025-11-23 21:13:40.084 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:40 compute-1 ovn_controller[132845]: 2025-11-23T21:13:40Z|00098|binding|INFO|Releasing lport 6ab19126-935d-4e09-a163-fbca05fb1c6f from this chassis (sb_readonly=0)
Nov 23 21:13:40 compute-1 nova_compute[230183]: 2025-11-23 21:13:40.085 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:40.086 142158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fd64d126-bc30-4f96-8737-9a4b1cf2fe8a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fd64d126-bc30-4f96-8737-9a4b1cf2fe8a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:40.087 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[5b84d008-22a6-44bf-8806-627cc78a04fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:40.088 142158 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]: global
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]:     log         /dev/log local0 debug
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]:     log-tag     haproxy-metadata-proxy-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]:     user        root
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]:     group       root
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]:     maxconn     1024
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]:     pidfile     /var/lib/neutron/external/pids/fd64d126-bc30-4f96-8737-9a4b1cf2fe8a.pid.haproxy
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]:     daemon
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]: 
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]: defaults
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]:     log global
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]:     mode http
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]:     option httplog
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]:     option dontlognull
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]:     option http-server-close
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]:     option forwardfor
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]:     retries                 3
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]:     timeout http-request    30s
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]:     timeout connect         30s
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]:     timeout client          32s
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]:     timeout server          32s
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]:     timeout http-keep-alive 30s
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]: 
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]: 
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]: listen listener
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]:     bind 169.254.169.254:80
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]:     server metadata /var/lib/neutron/metadata_proxy
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]:     http-request add-header X-OVN-Network-ID fd64d126-bc30-4f96-8737-9a4b1cf2fe8a
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 23 21:13:40 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:40.089 142158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a', 'env', 'PROCESS_TAG=haproxy-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fd64d126-bc30-4f96-8737-9a4b1cf2fe8a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 23 21:13:40 compute-1 nova_compute[230183]: 2025-11-23 21:13:40.104 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:40 compute-1 nova_compute[230183]: 2025-11-23 21:13:40.281 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932420.2803879, c73efbfb-509e-4eb2-af63-a65ba0f98094 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 23 21:13:40 compute-1 nova_compute[230183]: 2025-11-23 21:13:40.282 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] VM Started (Lifecycle Event)
Nov 23 21:13:40 compute-1 nova_compute[230183]: 2025-11-23 21:13:40.285 230187 DEBUG nova.compute.manager [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 23 21:13:40 compute-1 nova_compute[230183]: 2025-11-23 21:13:40.288 230187 DEBUG nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 23 21:13:40 compute-1 nova_compute[230183]: 2025-11-23 21:13:40.292 230187 INFO nova.virt.libvirt.driver [-] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Instance spawned successfully.
Nov 23 21:13:40 compute-1 nova_compute[230183]: 2025-11-23 21:13:40.292 230187 DEBUG nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 23 21:13:40 compute-1 nova_compute[230183]: 2025-11-23 21:13:40.306 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:13:40 compute-1 nova_compute[230183]: 2025-11-23 21:13:40.311 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 23 21:13:40 compute-1 nova_compute[230183]: 2025-11-23 21:13:40.315 230187 DEBUG nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:13:40 compute-1 nova_compute[230183]: 2025-11-23 21:13:40.315 230187 DEBUG nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:13:40 compute-1 nova_compute[230183]: 2025-11-23 21:13:40.316 230187 DEBUG nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:13:40 compute-1 nova_compute[230183]: 2025-11-23 21:13:40.316 230187 DEBUG nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:13:40 compute-1 nova_compute[230183]: 2025-11-23 21:13:40.316 230187 DEBUG nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:13:40 compute-1 nova_compute[230183]: 2025-11-23 21:13:40.317 230187 DEBUG nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:13:40 compute-1 nova_compute[230183]: 2025-11-23 21:13:40.337 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 23 21:13:40 compute-1 nova_compute[230183]: 2025-11-23 21:13:40.337 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932420.2815745, c73efbfb-509e-4eb2-af63-a65ba0f98094 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 23 21:13:40 compute-1 nova_compute[230183]: 2025-11-23 21:13:40.338 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] VM Paused (Lifecycle Event)
Nov 23 21:13:40 compute-1 nova_compute[230183]: 2025-11-23 21:13:40.357 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:13:40 compute-1 nova_compute[230183]: 2025-11-23 21:13:40.360 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932420.287681, c73efbfb-509e-4eb2-af63-a65ba0f98094 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 23 21:13:40 compute-1 nova_compute[230183]: 2025-11-23 21:13:40.360 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] VM Resumed (Lifecycle Event)
Nov 23 21:13:40 compute-1 nova_compute[230183]: 2025-11-23 21:13:40.367 230187 INFO nova.compute.manager [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Took 5.76 seconds to spawn the instance on the hypervisor.
Nov 23 21:13:40 compute-1 nova_compute[230183]: 2025-11-23 21:13:40.367 230187 DEBUG nova.compute.manager [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:13:40 compute-1 nova_compute[230183]: 2025-11-23 21:13:40.375 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:13:40 compute-1 nova_compute[230183]: 2025-11-23 21:13:40.378 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 23 21:13:40 compute-1 nova_compute[230183]: 2025-11-23 21:13:40.407 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 23 21:13:40 compute-1 nova_compute[230183]: 2025-11-23 21:13:40.425 230187 INFO nova.compute.manager [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Took 6.64 seconds to build instance.
Nov 23 21:13:40 compute-1 nova_compute[230183]: 2025-11-23 21:13:40.437 230187 DEBUG oslo_concurrency.lockutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "c73efbfb-509e-4eb2-af63-a65ba0f98094" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:13:40 compute-1 podman[240700]: 2025-11-23 21:13:40.461708602 +0000 UTC m=+0.057271509 container create aed257386fa7168d7aafca0aa1ebaca1a27cc7ec76780c42d12be9a6c36ae44a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 23 21:13:40 compute-1 systemd[1]: Started libpod-conmon-aed257386fa7168d7aafca0aa1ebaca1a27cc7ec76780c42d12be9a6c36ae44a.scope.
Nov 23 21:13:40 compute-1 systemd[1]: Started libcrun container.
Nov 23 21:13:40 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c3f4166f87894fa9c10f5c59d20e17390aa56344dd4fc7f9dd0c49158d6a062/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 21:13:40 compute-1 podman[240700]: 2025-11-23 21:13:40.435193855 +0000 UTC m=+0.030756782 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 23 21:13:40 compute-1 podman[240700]: 2025-11-23 21:13:40.540739571 +0000 UTC m=+0.136302508 container init aed257386fa7168d7aafca0aa1ebaca1a27cc7ec76780c42d12be9a6c36ae44a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 23 21:13:40 compute-1 podman[240700]: 2025-11-23 21:13:40.545394575 +0000 UTC m=+0.140957482 container start aed257386fa7168d7aafca0aa1ebaca1a27cc7ec76780c42d12be9a6c36ae44a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 23 21:13:40 compute-1 neutron-haproxy-ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a[240715]: [NOTICE]   (240719) : New worker (240721) forked
Nov 23 21:13:40 compute-1 neutron-haproxy-ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a[240715]: [NOTICE]   (240719) : Loading success.
Nov 23 21:13:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:13:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:40.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:13:40 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0.
Nov 23 21:13:40 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:13:40.612778) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 21:13:40 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55
Nov 23 21:13:40 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932420612818, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1761, "num_deletes": 257, "total_data_size": 4531538, "memory_usage": 4623792, "flush_reason": "Manual Compaction"}
Nov 23 21:13:40 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started
Nov 23 21:13:40 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932420631136, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 2940573, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28502, "largest_seqno": 30258, "table_properties": {"data_size": 2933288, "index_size": 4228, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15040, "raw_average_key_size": 19, "raw_value_size": 2918711, "raw_average_value_size": 3790, "num_data_blocks": 186, "num_entries": 770, "num_filter_entries": 770, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763932273, "oldest_key_time": 1763932273, "file_creation_time": 1763932420, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Nov 23 21:13:40 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 18400 microseconds, and 5433 cpu microseconds.
Nov 23 21:13:40 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 21:13:40 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:13:40.631176) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 2940573 bytes OK
Nov 23 21:13:40 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:13:40.631193) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started
Nov 23 21:13:40 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:13:40.632355) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done
Nov 23 21:13:40 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:13:40.632376) EVENT_LOG_v1 {"time_micros": 1763932420632369, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 21:13:40 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:13:40.632397) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 21:13:40 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 4523514, prev total WAL file size 4523514, number of live WAL files 2.
Nov 23 21:13:40 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:13:40 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:13:40.633700) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353031' seq:72057594037927935, type:22 .. '6C6F676D00373534' seq:0, type:0; will stop at (end)
Nov 23 21:13:40 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 21:13:40 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(2871KB)], [54(14MB)]
Nov 23 21:13:40 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932420633725, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 17869174, "oldest_snapshot_seqno": -1}
Nov 23 21:13:40 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 6085 keys, 17720577 bytes, temperature: kUnknown
Nov 23 21:13:40 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932420742407, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 17720577, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17676197, "index_size": 28078, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15237, "raw_key_size": 154804, "raw_average_key_size": 25, "raw_value_size": 17562991, "raw_average_value_size": 2886, "num_data_blocks": 1153, "num_entries": 6085, "num_filter_entries": 6085, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763932420, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}}
Nov 23 21:13:40 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 21:13:40 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:13:40.742603) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 17720577 bytes
Nov 23 21:13:40 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:13:40.746059) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 164.3 rd, 163.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 14.2 +0.0 blob) out(16.9 +0.0 blob), read-write-amplify(12.1) write-amplify(6.0) OK, records in: 6617, records dropped: 532 output_compression: NoCompression
Nov 23 21:13:40 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:13:40.746077) EVENT_LOG_v1 {"time_micros": 1763932420746070, "job": 32, "event": "compaction_finished", "compaction_time_micros": 108740, "compaction_time_cpu_micros": 40678, "output_level": 6, "num_output_files": 1, "total_output_size": 17720577, "num_input_records": 6617, "num_output_records": 6085, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 21:13:40 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:13:40 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932420746643, "job": 32, "event": "table_file_deletion", "file_number": 56}
Nov 23 21:13:40 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:13:40 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932420748998, "job": 32, "event": "table_file_deletion", "file_number": 54}
Nov 23 21:13:40 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:13:40.633640) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:13:40 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:13:40.749072) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:13:40 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:13:40.749077) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:13:40 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:13:40.749079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:13:40 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:13:40.749080) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:13:40 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:13:40.749082) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:13:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:13:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:41.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:13:41 compute-1 nova_compute[230183]: 2025-11-23 21:13:41.558 230187 DEBUG nova.compute.manager [req-509eb8d3-cd45-476f-93ab-c382ec61f97a req-5d8edfaf-3589-4218-90b9-aae3742e6e19 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Received event network-vif-plugged-ba818b19-9f72-4242-b9d9-b1630b5d1f24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:13:41 compute-1 nova_compute[230183]: 2025-11-23 21:13:41.558 230187 DEBUG oslo_concurrency.lockutils [req-509eb8d3-cd45-476f-93ab-c382ec61f97a req-5d8edfaf-3589-4218-90b9-aae3742e6e19 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "c73efbfb-509e-4eb2-af63-a65ba0f98094-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:13:41 compute-1 nova_compute[230183]: 2025-11-23 21:13:41.558 230187 DEBUG oslo_concurrency.lockutils [req-509eb8d3-cd45-476f-93ab-c382ec61f97a req-5d8edfaf-3589-4218-90b9-aae3742e6e19 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "c73efbfb-509e-4eb2-af63-a65ba0f98094-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:13:41 compute-1 nova_compute[230183]: 2025-11-23 21:13:41.559 230187 DEBUG oslo_concurrency.lockutils [req-509eb8d3-cd45-476f-93ab-c382ec61f97a req-5d8edfaf-3589-4218-90b9-aae3742e6e19 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "c73efbfb-509e-4eb2-af63-a65ba0f98094-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:13:41 compute-1 nova_compute[230183]: 2025-11-23 21:13:41.560 230187 DEBUG nova.compute.manager [req-509eb8d3-cd45-476f-93ab-c382ec61f97a req-5d8edfaf-3589-4218-90b9-aae3742e6e19 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] No waiting events found dispatching network-vif-plugged-ba818b19-9f72-4242-b9d9-b1630b5d1f24 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 21:13:41 compute-1 nova_compute[230183]: 2025-11-23 21:13:41.560 230187 WARNING nova.compute.manager [req-509eb8d3-cd45-476f-93ab-c382ec61f97a req-5d8edfaf-3589-4218-90b9-aae3742e6e19 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Received unexpected event network-vif-plugged-ba818b19-9f72-4242-b9d9-b1630b5d1f24 for instance with vm_state active and task_state None.
Nov 23 21:13:42 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:13:42 compute-1 nova_compute[230183]: 2025-11-23 21:13:42.329 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:13:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:42.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:13:42 compute-1 ceph-mon[80135]: pgmap v968: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 23 21:13:42 compute-1 nova_compute[230183]: 2025-11-23 21:13:42.814 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:43 compute-1 nova_compute[230183]: 2025-11-23 21:13:43.112 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:43 compute-1 NetworkManager[49021]: <info>  [1763932423.1148] manager: (patch-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Nov 23 21:13:43 compute-1 ovn_controller[132845]: 2025-11-23T21:13:43Z|00099|binding|INFO|Releasing lport 6ab19126-935d-4e09-a163-fbca05fb1c6f from this chassis (sb_readonly=0)
Nov 23 21:13:43 compute-1 NetworkManager[49021]: <info>  [1763932423.1159] manager: (patch-br-int-to-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Nov 23 21:13:43 compute-1 ovn_controller[132845]: 2025-11-23T21:13:43Z|00100|binding|INFO|Releasing lport 6ab19126-935d-4e09-a163-fbca05fb1c6f from this chassis (sb_readonly=0)
Nov 23 21:13:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:13:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:43.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:13:43 compute-1 nova_compute[230183]: 2025-11-23 21:13:43.586 230187 DEBUG oslo_concurrency.lockutils [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "c73efbfb-509e-4eb2-af63-a65ba0f98094" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:13:43 compute-1 nova_compute[230183]: 2025-11-23 21:13:43.587 230187 DEBUG oslo_concurrency.lockutils [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "c73efbfb-509e-4eb2-af63-a65ba0f98094" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:13:43 compute-1 nova_compute[230183]: 2025-11-23 21:13:43.587 230187 DEBUG oslo_concurrency.lockutils [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "c73efbfb-509e-4eb2-af63-a65ba0f98094-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:13:43 compute-1 nova_compute[230183]: 2025-11-23 21:13:43.588 230187 DEBUG oslo_concurrency.lockutils [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "c73efbfb-509e-4eb2-af63-a65ba0f98094-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:13:43 compute-1 nova_compute[230183]: 2025-11-23 21:13:43.588 230187 DEBUG oslo_concurrency.lockutils [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "c73efbfb-509e-4eb2-af63-a65ba0f98094-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:13:43 compute-1 nova_compute[230183]: 2025-11-23 21:13:43.589 230187 INFO nova.compute.manager [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Terminating instance
Nov 23 21:13:43 compute-1 nova_compute[230183]: 2025-11-23 21:13:43.589 230187 DEBUG nova.compute.manager [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 23 21:13:43 compute-1 kernel: tapba818b19-9f (unregistering): left promiscuous mode
Nov 23 21:13:43 compute-1 NetworkManager[49021]: <info>  [1763932423.6294] device (tapba818b19-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 23 21:13:43 compute-1 ovn_controller[132845]: 2025-11-23T21:13:43Z|00101|binding|INFO|Releasing lport ba818b19-9f72-4242-b9d9-b1630b5d1f24 from this chassis (sb_readonly=0)
Nov 23 21:13:43 compute-1 nova_compute[230183]: 2025-11-23 21:13:43.643 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:43 compute-1 ovn_controller[132845]: 2025-11-23T21:13:43Z|00102|binding|INFO|Setting lport ba818b19-9f72-4242-b9d9-b1630b5d1f24 down in Southbound
Nov 23 21:13:43 compute-1 ovn_controller[132845]: 2025-11-23T21:13:43Z|00103|binding|INFO|Removing iface tapba818b19-9f ovn-installed in OVS
Nov 23 21:13:43 compute-1 nova_compute[230183]: 2025-11-23 21:13:43.647 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:43 compute-1 nova_compute[230183]: 2025-11-23 21:13:43.669 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:43 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:43.673 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:e6:fe 10.100.0.12'], port_security=['fa:16:3e:0d:e6:fe 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1655123038', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'c73efbfb-509e-4eb2-af63-a65ba0f98094', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1655123038', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cfd1f7f1-25d4-42fe-ac59-ece898bff9bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.219'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1bc3d174-1770-40d5-b0cb-7f310bc5e484, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=ba818b19-9f72-4242-b9d9-b1630b5d1f24) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 21:13:43 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:43.675 142158 INFO neutron.agent.ovn.metadata.agent [-] Port ba818b19-9f72-4242-b9d9-b1630b5d1f24 in datapath fd64d126-bc30-4f96-8737-9a4b1cf2fe8a unbound from our chassis
Nov 23 21:13:43 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:43.676 142158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fd64d126-bc30-4f96-8737-9a4b1cf2fe8a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 23 21:13:43 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:43.678 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[73a18728-34e4-46a8-8f75-942d1b2bfc6a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:43 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:43.679 142158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a namespace which is not needed anymore
Nov 23 21:13:43 compute-1 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000008.scope: Deactivated successfully.
Nov 23 21:13:43 compute-1 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000008.scope: Consumed 4.350s CPU time.
Nov 23 21:13:43 compute-1 systemd-machined[193469]: Machine qemu-5-instance-00000008 terminated.
Nov 23 21:13:43 compute-1 nova_compute[230183]: 2025-11-23 21:13:43.720 230187 DEBUG nova.compute.manager [req-78eceada-8483-4cb1-ada7-74c934ec2e8e req-6307c7ca-8b79-43ac-8d52-4b7e37ec93f1 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Received event network-changed-ba818b19-9f72-4242-b9d9-b1630b5d1f24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:13:43 compute-1 nova_compute[230183]: 2025-11-23 21:13:43.720 230187 DEBUG nova.compute.manager [req-78eceada-8483-4cb1-ada7-74c934ec2e8e req-6307c7ca-8b79-43ac-8d52-4b7e37ec93f1 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Refreshing instance network info cache due to event network-changed-ba818b19-9f72-4242-b9d9-b1630b5d1f24. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 23 21:13:43 compute-1 nova_compute[230183]: 2025-11-23 21:13:43.721 230187 DEBUG oslo_concurrency.lockutils [req-78eceada-8483-4cb1-ada7-74c934ec2e8e req-6307c7ca-8b79-43ac-8d52-4b7e37ec93f1 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-c73efbfb-509e-4eb2-af63-a65ba0f98094" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:13:43 compute-1 nova_compute[230183]: 2025-11-23 21:13:43.721 230187 DEBUG oslo_concurrency.lockutils [req-78eceada-8483-4cb1-ada7-74c934ec2e8e req-6307c7ca-8b79-43ac-8d52-4b7e37ec93f1 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-c73efbfb-509e-4eb2-af63-a65ba0f98094" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:13:43 compute-1 nova_compute[230183]: 2025-11-23 21:13:43.721 230187 DEBUG nova.network.neutron [req-78eceada-8483-4cb1-ada7-74c934ec2e8e req-6307c7ca-8b79-43ac-8d52-4b7e37ec93f1 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Refreshing network info cache for port ba818b19-9f72-4242-b9d9-b1630b5d1f24 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 23 21:13:43 compute-1 neutron-haproxy-ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a[240715]: [NOTICE]   (240719) : haproxy version is 2.8.14-c23fe91
Nov 23 21:13:43 compute-1 neutron-haproxy-ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a[240715]: [NOTICE]   (240719) : path to executable is /usr/sbin/haproxy
Nov 23 21:13:43 compute-1 neutron-haproxy-ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a[240715]: [WARNING]  (240719) : Exiting Master process...
Nov 23 21:13:43 compute-1 neutron-haproxy-ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a[240715]: [ALERT]    (240719) : Current worker (240721) exited with code 143 (Terminated)
Nov 23 21:13:43 compute-1 neutron-haproxy-ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a[240715]: [WARNING]  (240719) : All workers exited. Exiting... (0)
Nov 23 21:13:43 compute-1 systemd[1]: libpod-aed257386fa7168d7aafca0aa1ebaca1a27cc7ec76780c42d12be9a6c36ae44a.scope: Deactivated successfully.
Nov 23 21:13:43 compute-1 conmon[240715]: conmon aed257386fa7168d7aaf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aed257386fa7168d7aafca0aa1ebaca1a27cc7ec76780c42d12be9a6c36ae44a.scope/container/memory.events
Nov 23 21:13:43 compute-1 podman[240756]: 2025-11-23 21:13:43.823392025 +0000 UTC m=+0.048601799 container died aed257386fa7168d7aafca0aa1ebaca1a27cc7ec76780c42d12be9a6c36ae44a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 23 21:13:43 compute-1 nova_compute[230183]: 2025-11-23 21:13:43.829 230187 INFO nova.virt.libvirt.driver [-] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Instance destroyed successfully.
Nov 23 21:13:43 compute-1 nova_compute[230183]: 2025-11-23 21:13:43.829 230187 DEBUG nova.objects.instance [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'resources' on Instance uuid c73efbfb-509e-4eb2-af63-a65ba0f98094 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:13:43 compute-1 nova_compute[230183]: 2025-11-23 21:13:43.842 230187 DEBUG nova.virt.libvirt.vif [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:13:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1142109245',display_name='tempest-TestNetworkBasicOps-server-1142109245',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1142109245',id=8,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA/OTvB3SF9HLz+tQB9k6+NtWY4GDi+dCLNTP2C1LVBWBWcF8hE2KwmFS1DV+sHHE6UrvKxVths55wvKBDKkRLk/bT3g1pE3soqIrQx5GQa2qNLkE7pPi6maRhw2rsAshw==',key_name='tempest-TestNetworkBasicOps-101179999',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:13:40Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-mndkc2jx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:13:40Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=c73efbfb-509e-4eb2-af63-a65ba0f98094,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "address": "fa:16:3e:0d:e6:fe", "network": {"id": "fd64d126-bc30-4f96-8737-9a4b1cf2fe8a", "bridge": "br-int", "label": "tempest-network-smoke--1300883220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba818b19-9f", "ovs_interfaceid": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 23 21:13:43 compute-1 nova_compute[230183]: 2025-11-23 21:13:43.843 230187 DEBUG nova.network.os_vif_util [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "address": "fa:16:3e:0d:e6:fe", "network": {"id": "fd64d126-bc30-4f96-8737-9a4b1cf2fe8a", "bridge": "br-int", "label": "tempest-network-smoke--1300883220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba818b19-9f", "ovs_interfaceid": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 23 21:13:43 compute-1 nova_compute[230183]: 2025-11-23 21:13:43.845 230187 DEBUG nova.network.os_vif_util [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:e6:fe,bridge_name='br-int',has_traffic_filtering=True,id=ba818b19-9f72-4242-b9d9-b1630b5d1f24,network=Network(fd64d126-bc30-4f96-8737-9a4b1cf2fe8a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba818b19-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 23 21:13:43 compute-1 nova_compute[230183]: 2025-11-23 21:13:43.846 230187 DEBUG os_vif [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:e6:fe,bridge_name='br-int',has_traffic_filtering=True,id=ba818b19-9f72-4242-b9d9-b1630b5d1f24,network=Network(fd64d126-bc30-4f96-8737-9a4b1cf2fe8a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba818b19-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 23 21:13:43 compute-1 nova_compute[230183]: 2025-11-23 21:13:43.849 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:43 compute-1 nova_compute[230183]: 2025-11-23 21:13:43.849 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba818b19-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:13:43 compute-1 nova_compute[230183]: 2025-11-23 21:13:43.851 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:43 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aed257386fa7168d7aafca0aa1ebaca1a27cc7ec76780c42d12be9a6c36ae44a-userdata-shm.mount: Deactivated successfully.
Nov 23 21:13:43 compute-1 systemd[1]: var-lib-containers-storage-overlay-9c3f4166f87894fa9c10f5c59d20e17390aa56344dd4fc7f9dd0c49158d6a062-merged.mount: Deactivated successfully.
Nov 23 21:13:43 compute-1 nova_compute[230183]: 2025-11-23 21:13:43.859 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:13:43 compute-1 nova_compute[230183]: 2025-11-23 21:13:43.861 230187 INFO os_vif [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:e6:fe,bridge_name='br-int',has_traffic_filtering=True,id=ba818b19-9f72-4242-b9d9-b1630b5d1f24,network=Network(fd64d126-bc30-4f96-8737-9a4b1cf2fe8a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba818b19-9f')
Nov 23 21:13:43 compute-1 podman[240756]: 2025-11-23 21:13:43.867182773 +0000 UTC m=+0.092392567 container cleanup aed257386fa7168d7aafca0aa1ebaca1a27cc7ec76780c42d12be9a6c36ae44a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 21:13:43 compute-1 systemd[1]: libpod-conmon-aed257386fa7168d7aafca0aa1ebaca1a27cc7ec76780c42d12be9a6c36ae44a.scope: Deactivated successfully.
Nov 23 21:13:43 compute-1 podman[240808]: 2025-11-23 21:13:43.931044517 +0000 UTC m=+0.045267580 container remove aed257386fa7168d7aafca0aa1ebaca1a27cc7ec76780c42d12be9a6c36ae44a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 21:13:43 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:43.936 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[c21bd24c-2c13-4e5c-a611-bb002d3a8fa2]: (4, ('Sun Nov 23 09:13:43 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a (aed257386fa7168d7aafca0aa1ebaca1a27cc7ec76780c42d12be9a6c36ae44a)\naed257386fa7168d7aafca0aa1ebaca1a27cc7ec76780c42d12be9a6c36ae44a\nSun Nov 23 09:13:43 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a (aed257386fa7168d7aafca0aa1ebaca1a27cc7ec76780c42d12be9a6c36ae44a)\naed257386fa7168d7aafca0aa1ebaca1a27cc7ec76780c42d12be9a6c36ae44a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:43 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:43.938 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[e91a99b3-a468-48c2-9fa2-adb9afc4e4ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:43 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:43.939 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd64d126-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:13:43 compute-1 kernel: tapfd64d126-b0: left promiscuous mode
Nov 23 21:13:43 compute-1 nova_compute[230183]: 2025-11-23 21:13:43.941 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:43 compute-1 nova_compute[230183]: 2025-11-23 21:13:43.953 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:43 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:43.956 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[aa732f42-46c8-400d-89e2-8fa4f226f16e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:43 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:43.974 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[21c94eb2-e8ff-4eb8-a289-286cffb259e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:43 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:43.975 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[ebe4dc95-25a1-439c-94f1-2466a37ac21c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:43 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:43.994 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[c73a4ca1-6c6f-44a3-989b-101de94f3337]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436792, 'reachable_time': 26001, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240826, 'error': None, 'target': 'ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:43 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:43.996 142272 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 23 21:13:43 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:43.997 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[3ef5268f-0265-49da-80d2-c2a242cf5d25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:13:43 compute-1 systemd[1]: run-netns-ovnmeta\x2dfd64d126\x2dbc30\x2d4f96\x2d8737\x2d9a4b1cf2fe8a.mount: Deactivated successfully.
Nov 23 21:13:44 compute-1 nova_compute[230183]: 2025-11-23 21:13:44.299 230187 INFO nova.virt.libvirt.driver [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Deleting instance files /var/lib/nova/instances/c73efbfb-509e-4eb2-af63-a65ba0f98094_del
Nov 23 21:13:44 compute-1 nova_compute[230183]: 2025-11-23 21:13:44.300 230187 INFO nova.virt.libvirt.driver [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Deletion of /var/lib/nova/instances/c73efbfb-509e-4eb2-af63-a65ba0f98094_del complete
Nov 23 21:13:44 compute-1 nova_compute[230183]: 2025-11-23 21:13:44.358 230187 INFO nova.compute.manager [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Took 0.77 seconds to destroy the instance on the hypervisor.
Nov 23 21:13:44 compute-1 nova_compute[230183]: 2025-11-23 21:13:44.360 230187 DEBUG oslo.service.loopingcall [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 23 21:13:44 compute-1 nova_compute[230183]: 2025-11-23 21:13:44.361 230187 DEBUG nova.compute.manager [-] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 23 21:13:44 compute-1 nova_compute[230183]: 2025-11-23 21:13:44.361 230187 DEBUG nova.network.neutron [-] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 23 21:13:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.003000078s ======
Nov 23 21:13:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:44.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000078s
Nov 23 21:13:44 compute-1 ceph-mon[80135]: pgmap v969: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 23 21:13:45 compute-1 nova_compute[230183]: 2025-11-23 21:13:45.052 230187 DEBUG nova.network.neutron [req-78eceada-8483-4cb1-ada7-74c934ec2e8e req-6307c7ca-8b79-43ac-8d52-4b7e37ec93f1 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Updated VIF entry in instance network info cache for port ba818b19-9f72-4242-b9d9-b1630b5d1f24. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 23 21:13:45 compute-1 nova_compute[230183]: 2025-11-23 21:13:45.054 230187 DEBUG nova.network.neutron [req-78eceada-8483-4cb1-ada7-74c934ec2e8e req-6307c7ca-8b79-43ac-8d52-4b7e37ec93f1 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Updating instance_info_cache with network_info: [{"id": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "address": "fa:16:3e:0d:e6:fe", "network": {"id": "fd64d126-bc30-4f96-8737-9a4b1cf2fe8a", "bridge": "br-int", "label": "tempest-network-smoke--1300883220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba818b19-9f", "ovs_interfaceid": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:13:45 compute-1 nova_compute[230183]: 2025-11-23 21:13:45.072 230187 DEBUG oslo_concurrency.lockutils [req-78eceada-8483-4cb1-ada7-74c934ec2e8e req-6307c7ca-8b79-43ac-8d52-4b7e37ec93f1 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-c73efbfb-509e-4eb2-af63-a65ba0f98094" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:13:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:13:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:45.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:13:45 compute-1 nova_compute[230183]: 2025-11-23 21:13:45.775 230187 DEBUG nova.network.neutron [-] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:13:45 compute-1 nova_compute[230183]: 2025-11-23 21:13:45.792 230187 DEBUG nova.compute.manager [req-5500e33c-6c6f-4554-9444-5c3bd306dd0c req-50977513-41d8-4bab-8890-af002409110f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Received event network-vif-unplugged-ba818b19-9f72-4242-b9d9-b1630b5d1f24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:13:45 compute-1 nova_compute[230183]: 2025-11-23 21:13:45.792 230187 DEBUG oslo_concurrency.lockutils [req-5500e33c-6c6f-4554-9444-5c3bd306dd0c req-50977513-41d8-4bab-8890-af002409110f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "c73efbfb-509e-4eb2-af63-a65ba0f98094-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:13:45 compute-1 nova_compute[230183]: 2025-11-23 21:13:45.793 230187 DEBUG oslo_concurrency.lockutils [req-5500e33c-6c6f-4554-9444-5c3bd306dd0c req-50977513-41d8-4bab-8890-af002409110f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "c73efbfb-509e-4eb2-af63-a65ba0f98094-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:13:45 compute-1 nova_compute[230183]: 2025-11-23 21:13:45.793 230187 DEBUG oslo_concurrency.lockutils [req-5500e33c-6c6f-4554-9444-5c3bd306dd0c req-50977513-41d8-4bab-8890-af002409110f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "c73efbfb-509e-4eb2-af63-a65ba0f98094-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:13:45 compute-1 nova_compute[230183]: 2025-11-23 21:13:45.793 230187 DEBUG nova.compute.manager [req-5500e33c-6c6f-4554-9444-5c3bd306dd0c req-50977513-41d8-4bab-8890-af002409110f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] No waiting events found dispatching network-vif-unplugged-ba818b19-9f72-4242-b9d9-b1630b5d1f24 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 21:13:45 compute-1 nova_compute[230183]: 2025-11-23 21:13:45.793 230187 DEBUG nova.compute.manager [req-5500e33c-6c6f-4554-9444-5c3bd306dd0c req-50977513-41d8-4bab-8890-af002409110f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Received event network-vif-unplugged-ba818b19-9f72-4242-b9d9-b1630b5d1f24 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 23 21:13:45 compute-1 nova_compute[230183]: 2025-11-23 21:13:45.793 230187 DEBUG nova.compute.manager [req-5500e33c-6c6f-4554-9444-5c3bd306dd0c req-50977513-41d8-4bab-8890-af002409110f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Received event network-vif-plugged-ba818b19-9f72-4242-b9d9-b1630b5d1f24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:13:45 compute-1 nova_compute[230183]: 2025-11-23 21:13:45.794 230187 DEBUG oslo_concurrency.lockutils [req-5500e33c-6c6f-4554-9444-5c3bd306dd0c req-50977513-41d8-4bab-8890-af002409110f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "c73efbfb-509e-4eb2-af63-a65ba0f98094-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:13:45 compute-1 nova_compute[230183]: 2025-11-23 21:13:45.794 230187 DEBUG oslo_concurrency.lockutils [req-5500e33c-6c6f-4554-9444-5c3bd306dd0c req-50977513-41d8-4bab-8890-af002409110f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "c73efbfb-509e-4eb2-af63-a65ba0f98094-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:13:45 compute-1 nova_compute[230183]: 2025-11-23 21:13:45.794 230187 DEBUG oslo_concurrency.lockutils [req-5500e33c-6c6f-4554-9444-5c3bd306dd0c req-50977513-41d8-4bab-8890-af002409110f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "c73efbfb-509e-4eb2-af63-a65ba0f98094-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:13:45 compute-1 nova_compute[230183]: 2025-11-23 21:13:45.794 230187 DEBUG nova.compute.manager [req-5500e33c-6c6f-4554-9444-5c3bd306dd0c req-50977513-41d8-4bab-8890-af002409110f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] No waiting events found dispatching network-vif-plugged-ba818b19-9f72-4242-b9d9-b1630b5d1f24 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 21:13:45 compute-1 nova_compute[230183]: 2025-11-23 21:13:45.794 230187 WARNING nova.compute.manager [req-5500e33c-6c6f-4554-9444-5c3bd306dd0c req-50977513-41d8-4bab-8890-af002409110f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Received unexpected event network-vif-plugged-ba818b19-9f72-4242-b9d9-b1630b5d1f24 for instance with vm_state active and task_state deleting.
Nov 23 21:13:45 compute-1 nova_compute[230183]: 2025-11-23 21:13:45.796 230187 INFO nova.compute.manager [-] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Took 1.44 seconds to deallocate network for instance.
Nov 23 21:13:45 compute-1 nova_compute[230183]: 2025-11-23 21:13:45.830 230187 DEBUG oslo_concurrency.lockutils [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:13:45 compute-1 nova_compute[230183]: 2025-11-23 21:13:45.830 230187 DEBUG oslo_concurrency.lockutils [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:13:45 compute-1 sudo[240829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:13:45 compute-1 sudo[240829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:13:45 compute-1 sudo[240829]: pam_unix(sudo:session): session closed for user root
Nov 23 21:13:45 compute-1 nova_compute[230183]: 2025-11-23 21:13:45.884 230187 DEBUG oslo_concurrency.processutils [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:13:46 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:13:46 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2090295414' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:13:46 compute-1 nova_compute[230183]: 2025-11-23 21:13:46.331 230187 DEBUG oslo_concurrency.processutils [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:13:46 compute-1 nova_compute[230183]: 2025-11-23 21:13:46.341 230187 DEBUG nova.compute.provider_tree [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:13:46 compute-1 nova_compute[230183]: 2025-11-23 21:13:46.359 230187 DEBUG nova.scheduler.client.report [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:13:46 compute-1 nova_compute[230183]: 2025-11-23 21:13:46.381 230187 DEBUG oslo_concurrency.lockutils [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.551s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:13:46 compute-1 nova_compute[230183]: 2025-11-23 21:13:46.414 230187 INFO nova.scheduler.client.report [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Deleted allocations for instance c73efbfb-509e-4eb2-af63-a65ba0f98094
Nov 23 21:13:46 compute-1 nova_compute[230183]: 2025-11-23 21:13:46.481 230187 DEBUG oslo_concurrency.lockutils [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "c73efbfb-509e-4eb2-af63-a65ba0f98094" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.894s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:13:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:13:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:46.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:13:46 compute-1 ceph-mon[80135]: pgmap v970: 337 pgs: 337 active+clean; 41 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Nov 23 21:13:46 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2090295414' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:13:47 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:13:47 compute-1 nova_compute[230183]: 2025-11-23 21:13:47.332 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.002000052s ======
Nov 23 21:13:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:47.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Nov 23 21:13:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:13:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:48.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:13:48 compute-1 ceph-mon[80135]: pgmap v971: 337 pgs: 337 active+clean; 41 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 126 op/s
Nov 23 21:13:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:13:48 compute-1 nova_compute[230183]: 2025-11-23 21:13:48.852 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:13:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:49.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:13:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:13:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:50.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:13:50 compute-1 ceph-mon[80135]: pgmap v972: 337 pgs: 337 active+clean; 41 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.7 MiB/s wr, 127 op/s
Nov 23 21:13:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:51.070 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:13:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:51.070 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:13:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:13:51.070 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:13:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:13:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:51.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:13:52 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:13:52 compute-1 nova_compute[230183]: 2025-11-23 21:13:52.333 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:13:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:52.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:13:52 compute-1 ceph-mon[80135]: pgmap v973: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 124 op/s
Nov 23 21:13:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:13:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:53.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:13:53 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3665490940' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:13:53 compute-1 nova_compute[230183]: 2025-11-23 21:13:53.854 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:13:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:54.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:13:54 compute-1 ceph-mon[80135]: pgmap v974: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 96 op/s
Nov 23 21:13:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:13:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:55.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:13:55 compute-1 ceph-mon[80135]: pgmap v975: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Nov 23 21:13:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 21:13:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:56.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 21:13:56 compute-1 podman[240883]: 2025-11-23 21:13:56.665162873 +0000 UTC m=+0.070634116 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 23 21:13:56 compute-1 podman[240882]: 2025-11-23 21:13:56.675596811 +0000 UTC m=+0.088171634 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 23 21:13:56 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1598720131' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:13:56 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1065404581' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:13:57 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:13:57 compute-1 nova_compute[230183]: 2025-11-23 21:13:57.335 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:13:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:57.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:13:57 compute-1 ceph-mon[80135]: pgmap v976: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 23 21:13:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:13:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:58.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:13:58 compute-1 nova_compute[230183]: 2025-11-23 21:13:58.828 230187 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763932423.826772, c73efbfb-509e-4eb2-af63-a65ba0f98094 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 23 21:13:58 compute-1 nova_compute[230183]: 2025-11-23 21:13:58.828 230187 INFO nova.compute.manager [-] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] VM Stopped (Lifecycle Event)
Nov 23 21:13:58 compute-1 nova_compute[230183]: 2025-11-23 21:13:58.855 230187 DEBUG nova.compute.manager [None req-2234cdeb-2893-475f-834a-2b64cfecfdc7 - - - - - -] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:13:58 compute-1 nova_compute[230183]: 2025-11-23 21:13:58.855 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:13:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:13:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:13:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:59.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:00 compute-1 ceph-mon[80135]: pgmap v977: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 115 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Nov 23 21:14:00 compute-1 sudo[240927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 21:14:00 compute-1 sudo[240927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:14:00 compute-1 sudo[240927]: pam_unix(sudo:session): session closed for user root
Nov 23 21:14:00 compute-1 sudo[240952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 21:14:00 compute-1 sudo[240952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:14:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:00.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:01 compute-1 sudo[240952]: pam_unix(sudo:session): session closed for user root
Nov 23 21:14:01 compute-1 nova_compute[230183]: 2025-11-23 21:14:01.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:14:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:14:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:01.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:14:01 compute-1 podman[241011]: 2025-11-23 21:14:01.660426629 +0000 UTC m=+0.078872375 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 23 21:14:01 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:14:01 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 21:14:02 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:14:02 compute-1 nova_compute[230183]: 2025-11-23 21:14:02.337 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:02.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:02 compute-1 ceph-mon[80135]: pgmap v978: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Nov 23 21:14:02 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:14:02 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:14:02 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 21:14:02 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 21:14:02 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:14:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:03.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:03 compute-1 nova_compute[230183]: 2025-11-23 21:14:03.858 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:14:03 compute-1 ceph-mon[80135]: pgmap v979: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Nov 23 21:14:04 compute-1 nova_compute[230183]: 2025-11-23 21:14:04.422 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:14:04 compute-1 nova_compute[230183]: 2025-11-23 21:14:04.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:14:04 compute-1 nova_compute[230183]: 2025-11-23 21:14:04.426 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 21:14:04 compute-1 nova_compute[230183]: 2025-11-23 21:14:04.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:14:04 compute-1 nova_compute[230183]: 2025-11-23 21:14:04.448 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:14:04 compute-1 nova_compute[230183]: 2025-11-23 21:14:04.448 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:14:04 compute-1 nova_compute[230183]: 2025-11-23 21:14:04.448 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:14:04 compute-1 nova_compute[230183]: 2025-11-23 21:14:04.449 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 21:14:04 compute-1 nova_compute[230183]: 2025-11-23 21:14:04.449 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:14:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:04.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:04 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:14:04 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1627076870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:14:04 compute-1 nova_compute[230183]: 2025-11-23 21:14:04.886 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:14:04 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1627076870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:14:05 compute-1 nova_compute[230183]: 2025-11-23 21:14:05.018 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 21:14:05 compute-1 nova_compute[230183]: 2025-11-23 21:14:05.019 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4923MB free_disk=59.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 21:14:05 compute-1 nova_compute[230183]: 2025-11-23 21:14:05.020 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:14:05 compute-1 nova_compute[230183]: 2025-11-23 21:14:05.020 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:14:05 compute-1 nova_compute[230183]: 2025-11-23 21:14:05.066 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 21:14:05 compute-1 nova_compute[230183]: 2025-11-23 21:14:05.066 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 21:14:05 compute-1 nova_compute[230183]: 2025-11-23 21:14:05.090 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:14:05 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:14:05 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3260849041' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:14:05 compute-1 nova_compute[230183]: 2025-11-23 21:14:05.525 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:14:05 compute-1 nova_compute[230183]: 2025-11-23 21:14:05.529 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:14:05 compute-1 nova_compute[230183]: 2025-11-23 21:14:05.543 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:14:05 compute-1 nova_compute[230183]: 2025-11-23 21:14:05.558 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 21:14:05 compute-1 nova_compute[230183]: 2025-11-23 21:14:05.558 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.538s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:14:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:14:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:05.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:14:05 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/4100265904' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:14:05 compute-1 ceph-mon[80135]: pgmap v980: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Nov 23 21:14:05 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3260849041' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:14:05 compute-1 sudo[241078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:14:05 compute-1 sudo[241078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:14:05 compute-1 sudo[241078]: pam_unix(sudo:session): session closed for user root
Nov 23 21:14:06 compute-1 nova_compute[230183]: 2025-11-23 21:14:06.559 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:14:06 compute-1 nova_compute[230183]: 2025-11-23 21:14:06.559 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 21:14:06 compute-1 nova_compute[230183]: 2025-11-23 21:14:06.559 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 21:14:06 compute-1 nova_compute[230183]: 2025-11-23 21:14:06.571 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 21:14:06 compute-1 nova_compute[230183]: 2025-11-23 21:14:06.572 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:14:06 compute-1 nova_compute[230183]: 2025-11-23 21:14:06.572 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:14:06 compute-1 nova_compute[230183]: 2025-11-23 21:14:06.572 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:14:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:06.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:06 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:14:06.753 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 21:14:06 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:14:06.753 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 23 21:14:06 compute-1 nova_compute[230183]: 2025-11-23 21:14:06.754 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:07 compute-1 sudo[241103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 21:14:07 compute-1 sudo[241103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:14:07 compute-1 sudo[241103]: pam_unix(sudo:session): session closed for user root
Nov 23 21:14:07 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:14:07 compute-1 nova_compute[230183]: 2025-11-23 21:14:07.337 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:07 compute-1 nova_compute[230183]: 2025-11-23 21:14:07.436 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:14:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:07.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:08 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:14:08 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:14:08 compute-1 ceph-mon[80135]: pgmap v981: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Nov 23 21:14:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/2787009518' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 21:14:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/2787009518' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 21:14:08 compute-1 nova_compute[230183]: 2025-11-23 21:14:08.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:14:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:08.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:08 compute-1 nova_compute[230183]: 2025-11-23 21:14:08.860 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:09.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:10 compute-1 ceph-mon[80135]: pgmap v982: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Nov 23 21:14:10 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2913995880' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:14:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:10.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:10 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:14:10.756 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d8ff4ac4-2bee-48db-b79e-2466bc4db046, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:14:10 compute-1 nova_compute[230183]: 2025-11-23 21:14:10.940 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:11 compute-1 nova_compute[230183]: 2025-11-23 21:14:11.015 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:11.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:11 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/376672714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:14:12 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:14:12 compute-1 nova_compute[230183]: 2025-11-23 21:14:12.372 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:14:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:12.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:14:12 compute-1 ceph-mon[80135]: pgmap v983: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 14 KiB/s wr, 91 op/s
Nov 23 21:14:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:13.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:13 compute-1 nova_compute[230183]: 2025-11-23 21:14:13.862 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:14.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:14 compute-1 ceph-mon[80135]: pgmap v984: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 321 KiB/s rd, 1.2 KiB/s wr, 37 op/s
Nov 23 21:14:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:15.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:16.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:16 compute-1 ceph-mon[80135]: pgmap v985: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 321 KiB/s rd, 1.2 KiB/s wr, 37 op/s
Nov 23 21:14:16 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1470766541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:14:17 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:14:17 compute-1 nova_compute[230183]: 2025-11-23 21:14:17.375 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:17.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:17 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2177938279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:14:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:14:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:18.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:14:18 compute-1 ceph-mon[80135]: pgmap v986: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:14:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:14:18 compute-1 nova_compute[230183]: 2025-11-23 21:14:18.873 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:19.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:14:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:20.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:14:20 compute-1 ceph-mon[80135]: pgmap v987: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:14:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:21.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:22 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:14:22 compute-1 nova_compute[230183]: 2025-11-23 21:14:22.432 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:22.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:22 compute-1 ceph-mon[80135]: pgmap v988: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:14:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:23.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:23 compute-1 nova_compute[230183]: 2025-11-23 21:14:23.875 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:23 compute-1 ceph-mon[80135]: pgmap v989: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:14:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:14:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:24.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:14:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:25.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:26 compute-1 sudo[241139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:14:26 compute-1 sudo[241139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:14:26 compute-1 sudo[241139]: pam_unix(sudo:session): session closed for user root
Nov 23 21:14:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:14:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:26.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:14:26 compute-1 ceph-mon[80135]: pgmap v990: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:14:27 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:14:27 compute-1 nova_compute[230183]: 2025-11-23 21:14:27.434 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:14:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:27.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:14:27 compute-1 podman[241165]: 2025-11-23 21:14:27.669146941 +0000 UTC m=+0.083889920 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 23 21:14:27 compute-1 podman[241166]: 2025-11-23 21:14:27.669924822 +0000 UTC m=+0.083904980 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 23 21:14:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:28.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:28 compute-1 ceph-mon[80135]: pgmap v991: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:14:28 compute-1 nova_compute[230183]: 2025-11-23 21:14:28.879 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:14:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:29.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:14:29 compute-1 nova_compute[230183]: 2025-11-23 21:14:29.612 230187 DEBUG oslo_concurrency.lockutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "c833a97e-dc45-489f-98e1-a2d33397836c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:14:29 compute-1 nova_compute[230183]: 2025-11-23 21:14:29.612 230187 DEBUG oslo_concurrency.lockutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "c833a97e-dc45-489f-98e1-a2d33397836c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:14:29 compute-1 nova_compute[230183]: 2025-11-23 21:14:29.627 230187 DEBUG nova.compute.manager [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 23 21:14:29 compute-1 nova_compute[230183]: 2025-11-23 21:14:29.713 230187 DEBUG oslo_concurrency.lockutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:14:29 compute-1 nova_compute[230183]: 2025-11-23 21:14:29.714 230187 DEBUG oslo_concurrency.lockutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:14:29 compute-1 nova_compute[230183]: 2025-11-23 21:14:29.723 230187 DEBUG nova.virt.hardware [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 23 21:14:29 compute-1 nova_compute[230183]: 2025-11-23 21:14:29.723 230187 INFO nova.compute.claims [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Claim successful on node compute-1.ctlplane.example.com
Nov 23 21:14:29 compute-1 nova_compute[230183]: 2025-11-23 21:14:29.823 230187 DEBUG oslo_concurrency.processutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:14:30 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:14:30 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/128319179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:14:30 compute-1 nova_compute[230183]: 2025-11-23 21:14:30.285 230187 DEBUG oslo_concurrency.processutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:14:30 compute-1 nova_compute[230183]: 2025-11-23 21:14:30.291 230187 DEBUG nova.compute.provider_tree [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:14:30 compute-1 nova_compute[230183]: 2025-11-23 21:14:30.314 230187 DEBUG nova.scheduler.client.report [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:14:30 compute-1 nova_compute[230183]: 2025-11-23 21:14:30.344 230187 DEBUG oslo_concurrency.lockutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:14:30 compute-1 nova_compute[230183]: 2025-11-23 21:14:30.345 230187 DEBUG nova.compute.manager [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 23 21:14:30 compute-1 nova_compute[230183]: 2025-11-23 21:14:30.405 230187 DEBUG nova.compute.manager [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 23 21:14:30 compute-1 nova_compute[230183]: 2025-11-23 21:14:30.405 230187 DEBUG nova.network.neutron [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 23 21:14:30 compute-1 nova_compute[230183]: 2025-11-23 21:14:30.430 230187 INFO nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 23 21:14:30 compute-1 nova_compute[230183]: 2025-11-23 21:14:30.448 230187 DEBUG nova.compute.manager [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 23 21:14:30 compute-1 nova_compute[230183]: 2025-11-23 21:14:30.567 230187 DEBUG nova.compute.manager [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 23 21:14:30 compute-1 nova_compute[230183]: 2025-11-23 21:14:30.568 230187 DEBUG nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 23 21:14:30 compute-1 nova_compute[230183]: 2025-11-23 21:14:30.568 230187 INFO nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Creating image(s)
Nov 23 21:14:30 compute-1 nova_compute[230183]: 2025-11-23 21:14:30.586 230187 DEBUG nova.storage.rbd_utils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image c833a97e-dc45-489f-98e1-a2d33397836c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:14:30 compute-1 nova_compute[230183]: 2025-11-23 21:14:30.607 230187 DEBUG nova.storage.rbd_utils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image c833a97e-dc45-489f-98e1-a2d33397836c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:14:30 compute-1 nova_compute[230183]: 2025-11-23 21:14:30.626 230187 DEBUG nova.storage.rbd_utils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image c833a97e-dc45-489f-98e1-a2d33397836c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:14:30 compute-1 nova_compute[230183]: 2025-11-23 21:14:30.629 230187 DEBUG oslo_concurrency.processutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:14:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:30.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:30 compute-1 nova_compute[230183]: 2025-11-23 21:14:30.695 230187 DEBUG oslo_concurrency.processutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:14:30 compute-1 nova_compute[230183]: 2025-11-23 21:14:30.696 230187 DEBUG oslo_concurrency.lockutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:14:30 compute-1 nova_compute[230183]: 2025-11-23 21:14:30.697 230187 DEBUG oslo_concurrency.lockutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:14:30 compute-1 nova_compute[230183]: 2025-11-23 21:14:30.697 230187 DEBUG oslo_concurrency.lockutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:14:30 compute-1 ceph-mon[80135]: pgmap v992: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:14:30 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/128319179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:14:30 compute-1 nova_compute[230183]: 2025-11-23 21:14:30.717 230187 DEBUG nova.storage.rbd_utils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image c833a97e-dc45-489f-98e1-a2d33397836c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:14:30 compute-1 nova_compute[230183]: 2025-11-23 21:14:30.720 230187 DEBUG oslo_concurrency.processutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 c833a97e-dc45-489f-98e1-a2d33397836c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:14:30 compute-1 nova_compute[230183]: 2025-11-23 21:14:30.996 230187 DEBUG oslo_concurrency.processutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 c833a97e-dc45-489f-98e1-a2d33397836c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.276s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:14:31 compute-1 nova_compute[230183]: 2025-11-23 21:14:31.072 230187 DEBUG nova.storage.rbd_utils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] resizing rbd image c833a97e-dc45-489f-98e1-a2d33397836c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 23 21:14:31 compute-1 nova_compute[230183]: 2025-11-23 21:14:31.184 230187 DEBUG nova.objects.instance [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'migration_context' on Instance uuid c833a97e-dc45-489f-98e1-a2d33397836c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:14:31 compute-1 nova_compute[230183]: 2025-11-23 21:14:31.200 230187 DEBUG nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 23 21:14:31 compute-1 nova_compute[230183]: 2025-11-23 21:14:31.201 230187 DEBUG nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Ensure instance console log exists: /var/lib/nova/instances/c833a97e-dc45-489f-98e1-a2d33397836c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 23 21:14:31 compute-1 nova_compute[230183]: 2025-11-23 21:14:31.202 230187 DEBUG oslo_concurrency.lockutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:14:31 compute-1 nova_compute[230183]: 2025-11-23 21:14:31.202 230187 DEBUG oslo_concurrency.lockutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:14:31 compute-1 nova_compute[230183]: 2025-11-23 21:14:31.202 230187 DEBUG oslo_concurrency.lockutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:14:31 compute-1 nova_compute[230183]: 2025-11-23 21:14:31.346 230187 DEBUG nova.policy [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9fb5352c62684f2ba3a326a953a10dfe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '782593db60784ab8bff41fe87d72ff5f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 23 21:14:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:31.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:32 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:14:32 compute-1 nova_compute[230183]: 2025-11-23 21:14:32.436 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:32 compute-1 podman[241399]: 2025-11-23 21:14:32.626742395 +0000 UTC m=+0.047573510 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Nov 23 21:14:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:32.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:32 compute-1 ceph-mon[80135]: pgmap v993: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:14:32 compute-1 nova_compute[230183]: 2025-11-23 21:14:32.975 230187 DEBUG nova.network.neutron [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Successfully created port: b71755c1-8148-40c0-884d-aad83ae8602a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 23 21:14:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:33.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:14:33 compute-1 nova_compute[230183]: 2025-11-23 21:14:33.882 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:34 compute-1 nova_compute[230183]: 2025-11-23 21:14:34.070 230187 DEBUG nova.network.neutron [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Successfully updated port: b71755c1-8148-40c0-884d-aad83ae8602a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 23 21:14:34 compute-1 nova_compute[230183]: 2025-11-23 21:14:34.087 230187 DEBUG oslo_concurrency.lockutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "refresh_cache-c833a97e-dc45-489f-98e1-a2d33397836c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:14:34 compute-1 nova_compute[230183]: 2025-11-23 21:14:34.087 230187 DEBUG oslo_concurrency.lockutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquired lock "refresh_cache-c833a97e-dc45-489f-98e1-a2d33397836c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:14:34 compute-1 nova_compute[230183]: 2025-11-23 21:14:34.087 230187 DEBUG nova.network.neutron [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 23 21:14:34 compute-1 nova_compute[230183]: 2025-11-23 21:14:34.161 230187 DEBUG nova.compute.manager [req-79bc167b-e41f-4be0-b9c5-20b19be51c88 req-892a5d5b-35c7-4c68-80b6-40fa7eba739a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Received event network-changed-b71755c1-8148-40c0-884d-aad83ae8602a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:14:34 compute-1 nova_compute[230183]: 2025-11-23 21:14:34.161 230187 DEBUG nova.compute.manager [req-79bc167b-e41f-4be0-b9c5-20b19be51c88 req-892a5d5b-35c7-4c68-80b6-40fa7eba739a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Refreshing instance network info cache due to event network-changed-b71755c1-8148-40c0-884d-aad83ae8602a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 23 21:14:34 compute-1 nova_compute[230183]: 2025-11-23 21:14:34.161 230187 DEBUG oslo_concurrency.lockutils [req-79bc167b-e41f-4be0-b9c5-20b19be51c88 req-892a5d5b-35c7-4c68-80b6-40fa7eba739a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-c833a97e-dc45-489f-98e1-a2d33397836c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:14:34 compute-1 nova_compute[230183]: 2025-11-23 21:14:34.216 230187 DEBUG nova.network.neutron [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 23 21:14:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:34.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:34 compute-1 ceph-mon[80135]: pgmap v994: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:14:35 compute-1 nova_compute[230183]: 2025-11-23 21:14:35.528 230187 DEBUG nova.network.neutron [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Updating instance_info_cache with network_info: [{"id": "b71755c1-8148-40c0-884d-aad83ae8602a", "address": "fa:16:3e:2a:70:ad", "network": {"id": "33439544-e5f9-4500-9a9c-dbc1c4cd858c", "bridge": "br-int", "label": "tempest-network-smoke--1819281856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb71755c1-81", "ovs_interfaceid": "b71755c1-8148-40c0-884d-aad83ae8602a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:14:35 compute-1 nova_compute[230183]: 2025-11-23 21:14:35.548 230187 DEBUG oslo_concurrency.lockutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Releasing lock "refresh_cache-c833a97e-dc45-489f-98e1-a2d33397836c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:14:35 compute-1 nova_compute[230183]: 2025-11-23 21:14:35.549 230187 DEBUG nova.compute.manager [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Instance network_info: |[{"id": "b71755c1-8148-40c0-884d-aad83ae8602a", "address": "fa:16:3e:2a:70:ad", "network": {"id": "33439544-e5f9-4500-9a9c-dbc1c4cd858c", "bridge": "br-int", "label": "tempest-network-smoke--1819281856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb71755c1-81", "ovs_interfaceid": "b71755c1-8148-40c0-884d-aad83ae8602a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 23 21:14:35 compute-1 nova_compute[230183]: 2025-11-23 21:14:35.550 230187 DEBUG oslo_concurrency.lockutils [req-79bc167b-e41f-4be0-b9c5-20b19be51c88 req-892a5d5b-35c7-4c68-80b6-40fa7eba739a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-c833a97e-dc45-489f-98e1-a2d33397836c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:14:35 compute-1 nova_compute[230183]: 2025-11-23 21:14:35.550 230187 DEBUG nova.network.neutron [req-79bc167b-e41f-4be0-b9c5-20b19be51c88 req-892a5d5b-35c7-4c68-80b6-40fa7eba739a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Refreshing network info cache for port b71755c1-8148-40c0-884d-aad83ae8602a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 23 21:14:35 compute-1 nova_compute[230183]: 2025-11-23 21:14:35.553 230187 DEBUG nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Start _get_guest_xml network_info=[{"id": "b71755c1-8148-40c0-884d-aad83ae8602a", "address": "fa:16:3e:2a:70:ad", "network": {"id": "33439544-e5f9-4500-9a9c-dbc1c4cd858c", "bridge": "br-int", "label": "tempest-network-smoke--1819281856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb71755c1-81", "ovs_interfaceid": "b71755c1-8148-40c0-884d-aad83ae8602a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T21:05:43Z,direct_url=<?>,disk_format='qcow2',id=3c45fa6c-8a99-4359-a34e-d89f4e1e77d0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3f8fb5175f85402ba20cf9c6989d47cf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-23T21:05:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_options': None, 'encryption_format': None, 'guest_format': None, 'encrypted': False, 'size': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'image_id': '3c45fa6c-8a99-4359-a34e-d89f4e1e77d0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 23 21:14:35 compute-1 nova_compute[230183]: 2025-11-23 21:14:35.558 230187 WARNING nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 21:14:35 compute-1 nova_compute[230183]: 2025-11-23 21:14:35.570 230187 DEBUG nova.virt.libvirt.host [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 23 21:14:35 compute-1 nova_compute[230183]: 2025-11-23 21:14:35.571 230187 DEBUG nova.virt.libvirt.host [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 23 21:14:35 compute-1 nova_compute[230183]: 2025-11-23 21:14:35.577 230187 DEBUG nova.virt.libvirt.host [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 23 21:14:35 compute-1 nova_compute[230183]: 2025-11-23 21:14:35.578 230187 DEBUG nova.virt.libvirt.host [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 23 21:14:35 compute-1 nova_compute[230183]: 2025-11-23 21:14:35.578 230187 DEBUG nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 23 21:14:35 compute-1 nova_compute[230183]: 2025-11-23 21:14:35.579 230187 DEBUG nova.virt.hardware [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-23T21:05:42Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='56044b93-2979-48aa-b67f-c37e1b489306',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T21:05:43Z,direct_url=<?>,disk_format='qcow2',id=3c45fa6c-8a99-4359-a34e-d89f4e1e77d0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3f8fb5175f85402ba20cf9c6989d47cf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-23T21:05:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 23 21:14:35 compute-1 nova_compute[230183]: 2025-11-23 21:14:35.579 230187 DEBUG nova.virt.hardware [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 23 21:14:35 compute-1 nova_compute[230183]: 2025-11-23 21:14:35.580 230187 DEBUG nova.virt.hardware [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 23 21:14:35 compute-1 nova_compute[230183]: 2025-11-23 21:14:35.580 230187 DEBUG nova.virt.hardware [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 23 21:14:35 compute-1 nova_compute[230183]: 2025-11-23 21:14:35.580 230187 DEBUG nova.virt.hardware [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 23 21:14:35 compute-1 nova_compute[230183]: 2025-11-23 21:14:35.580 230187 DEBUG nova.virt.hardware [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 23 21:14:35 compute-1 nova_compute[230183]: 2025-11-23 21:14:35.581 230187 DEBUG nova.virt.hardware [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 23 21:14:35 compute-1 nova_compute[230183]: 2025-11-23 21:14:35.581 230187 DEBUG nova.virt.hardware [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 23 21:14:35 compute-1 nova_compute[230183]: 2025-11-23 21:14:35.581 230187 DEBUG nova.virt.hardware [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 23 21:14:35 compute-1 nova_compute[230183]: 2025-11-23 21:14:35.581 230187 DEBUG nova.virt.hardware [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 23 21:14:35 compute-1 nova_compute[230183]: 2025-11-23 21:14:35.582 230187 DEBUG nova.virt.hardware [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 23 21:14:35 compute-1 nova_compute[230183]: 2025-11-23 21:14:35.585 230187 DEBUG oslo_concurrency.processutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:14:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:35.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:35 compute-1 ceph-mon[80135]: pgmap v995: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 23 21:14:35 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 21:14:35 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3264451218' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:14:36 compute-1 nova_compute[230183]: 2025-11-23 21:14:36.013 230187 DEBUG oslo_concurrency.processutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:14:36 compute-1 nova_compute[230183]: 2025-11-23 21:14:36.035 230187 DEBUG nova.storage.rbd_utils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image c833a97e-dc45-489f-98e1-a2d33397836c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:14:36 compute-1 nova_compute[230183]: 2025-11-23 21:14:36.039 230187 DEBUG oslo_concurrency.processutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:14:36 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 21:14:36 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/440052661' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:14:36 compute-1 nova_compute[230183]: 2025-11-23 21:14:36.465 230187 DEBUG oslo_concurrency.processutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:14:36 compute-1 nova_compute[230183]: 2025-11-23 21:14:36.466 230187 DEBUG nova.virt.libvirt.vif [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-23T21:14:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-582512634',display_name='tempest-TestNetworkBasicOps-server-582512634',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-582512634',id=10,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCQNYwhSj43+WrihAMXXrcf3ffitbakZUmwhOuijrPcqM40TmVQc3wMfjU/cZyNHNeBKw0TKec9vXExOxmFIsncMN4D0yIYffuxIytj1M98N5vK6pCD4pL97G7XeskRufg==',key_name='tempest-TestNetworkBasicOps-1166238543',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-2k77ohtd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T21:14:30Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=c833a97e-dc45-489f-98e1-a2d33397836c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b71755c1-8148-40c0-884d-aad83ae8602a", "address": "fa:16:3e:2a:70:ad", "network": {"id": "33439544-e5f9-4500-9a9c-dbc1c4cd858c", "bridge": "br-int", "label": "tempest-network-smoke--1819281856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], 
"version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb71755c1-81", "ovs_interfaceid": "b71755c1-8148-40c0-884d-aad83ae8602a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 23 21:14:36 compute-1 nova_compute[230183]: 2025-11-23 21:14:36.467 230187 DEBUG nova.network.os_vif_util [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "b71755c1-8148-40c0-884d-aad83ae8602a", "address": "fa:16:3e:2a:70:ad", "network": {"id": "33439544-e5f9-4500-9a9c-dbc1c4cd858c", "bridge": "br-int", "label": "tempest-network-smoke--1819281856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb71755c1-81", "ovs_interfaceid": "b71755c1-8148-40c0-884d-aad83ae8602a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 23 21:14:36 compute-1 nova_compute[230183]: 2025-11-23 21:14:36.468 230187 DEBUG nova.network.os_vif_util [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:70:ad,bridge_name='br-int',has_traffic_filtering=True,id=b71755c1-8148-40c0-884d-aad83ae8602a,network=Network(33439544-e5f9-4500-9a9c-dbc1c4cd858c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb71755c1-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 23 21:14:36 compute-1 nova_compute[230183]: 2025-11-23 21:14:36.469 230187 DEBUG nova.objects.instance [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'pci_devices' on Instance uuid c833a97e-dc45-489f-98e1-a2d33397836c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:14:36 compute-1 nova_compute[230183]: 2025-11-23 21:14:36.490 230187 DEBUG nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] End _get_guest_xml xml=<domain type="kvm">
Nov 23 21:14:36 compute-1 nova_compute[230183]:   <uuid>c833a97e-dc45-489f-98e1-a2d33397836c</uuid>
Nov 23 21:14:36 compute-1 nova_compute[230183]:   <name>instance-0000000a</name>
Nov 23 21:14:36 compute-1 nova_compute[230183]:   <memory>131072</memory>
Nov 23 21:14:36 compute-1 nova_compute[230183]:   <vcpu>1</vcpu>
Nov 23 21:14:36 compute-1 nova_compute[230183]:   <metadata>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 21:14:36 compute-1 nova_compute[230183]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:       <nova:name>tempest-TestNetworkBasicOps-server-582512634</nova:name>
Nov 23 21:14:36 compute-1 nova_compute[230183]:       <nova:creationTime>2025-11-23 21:14:35</nova:creationTime>
Nov 23 21:14:36 compute-1 nova_compute[230183]:       <nova:flavor name="m1.nano">
Nov 23 21:14:36 compute-1 nova_compute[230183]:         <nova:memory>128</nova:memory>
Nov 23 21:14:36 compute-1 nova_compute[230183]:         <nova:disk>1</nova:disk>
Nov 23 21:14:36 compute-1 nova_compute[230183]:         <nova:swap>0</nova:swap>
Nov 23 21:14:36 compute-1 nova_compute[230183]:         <nova:ephemeral>0</nova:ephemeral>
Nov 23 21:14:36 compute-1 nova_compute[230183]:         <nova:vcpus>1</nova:vcpus>
Nov 23 21:14:36 compute-1 nova_compute[230183]:       </nova:flavor>
Nov 23 21:14:36 compute-1 nova_compute[230183]:       <nova:owner>
Nov 23 21:14:36 compute-1 nova_compute[230183]:         <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 21:14:36 compute-1 nova_compute[230183]:         <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 21:14:36 compute-1 nova_compute[230183]:       </nova:owner>
Nov 23 21:14:36 compute-1 nova_compute[230183]:       <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:       <nova:ports>
Nov 23 21:14:36 compute-1 nova_compute[230183]:         <nova:port uuid="b71755c1-8148-40c0-884d-aad83ae8602a">
Nov 23 21:14:36 compute-1 nova_compute[230183]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:         </nova:port>
Nov 23 21:14:36 compute-1 nova_compute[230183]:       </nova:ports>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     </nova:instance>
Nov 23 21:14:36 compute-1 nova_compute[230183]:   </metadata>
Nov 23 21:14:36 compute-1 nova_compute[230183]:   <sysinfo type="smbios">
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <system>
Nov 23 21:14:36 compute-1 nova_compute[230183]:       <entry name="manufacturer">RDO</entry>
Nov 23 21:14:36 compute-1 nova_compute[230183]:       <entry name="product">OpenStack Compute</entry>
Nov 23 21:14:36 compute-1 nova_compute[230183]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 23 21:14:36 compute-1 nova_compute[230183]:       <entry name="serial">c833a97e-dc45-489f-98e1-a2d33397836c</entry>
Nov 23 21:14:36 compute-1 nova_compute[230183]:       <entry name="uuid">c833a97e-dc45-489f-98e1-a2d33397836c</entry>
Nov 23 21:14:36 compute-1 nova_compute[230183]:       <entry name="family">Virtual Machine</entry>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     </system>
Nov 23 21:14:36 compute-1 nova_compute[230183]:   </sysinfo>
Nov 23 21:14:36 compute-1 nova_compute[230183]:   <os>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <boot dev="hd"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <smbios mode="sysinfo"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:   </os>
Nov 23 21:14:36 compute-1 nova_compute[230183]:   <features>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <acpi/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <apic/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <vmcoreinfo/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:   </features>
Nov 23 21:14:36 compute-1 nova_compute[230183]:   <clock offset="utc">
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <timer name="pit" tickpolicy="delay"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <timer name="hpet" present="no"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:   </clock>
Nov 23 21:14:36 compute-1 nova_compute[230183]:   <cpu mode="host-model" match="exact">
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <topology sockets="1" cores="1" threads="1"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:   </cpu>
Nov 23 21:14:36 compute-1 nova_compute[230183]:   <devices>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <disk type="network" device="disk">
Nov 23 21:14:36 compute-1 nova_compute[230183]:       <driver type="raw" cache="none"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:       <source protocol="rbd" name="vms/c833a97e-dc45-489f-98e1-a2d33397836c_disk">
Nov 23 21:14:36 compute-1 nova_compute[230183]:         <host name="192.168.122.100" port="6789"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:         <host name="192.168.122.102" port="6789"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:         <host name="192.168.122.101" port="6789"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:       </source>
Nov 23 21:14:36 compute-1 nova_compute[230183]:       <auth username="openstack">
Nov 23 21:14:36 compute-1 nova_compute[230183]:         <secret type="ceph" uuid="03808be8-ae4a-5548-82e6-4a294f1bc627"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:       </auth>
Nov 23 21:14:36 compute-1 nova_compute[230183]:       <target dev="vda" bus="virtio"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     </disk>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <disk type="network" device="cdrom">
Nov 23 21:14:36 compute-1 nova_compute[230183]:       <driver type="raw" cache="none"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:       <source protocol="rbd" name="vms/c833a97e-dc45-489f-98e1-a2d33397836c_disk.config">
Nov 23 21:14:36 compute-1 nova_compute[230183]:         <host name="192.168.122.100" port="6789"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:         <host name="192.168.122.102" port="6789"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:         <host name="192.168.122.101" port="6789"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:       </source>
Nov 23 21:14:36 compute-1 nova_compute[230183]:       <auth username="openstack">
Nov 23 21:14:36 compute-1 nova_compute[230183]:         <secret type="ceph" uuid="03808be8-ae4a-5548-82e6-4a294f1bc627"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:       </auth>
Nov 23 21:14:36 compute-1 nova_compute[230183]:       <target dev="sda" bus="sata"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     </disk>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <interface type="ethernet">
Nov 23 21:14:36 compute-1 nova_compute[230183]:       <mac address="fa:16:3e:2a:70:ad"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:       <model type="virtio"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:       <driver name="vhost" rx_queue_size="512"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:       <mtu size="1442"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:       <target dev="tapb71755c1-81"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     </interface>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <serial type="pty">
Nov 23 21:14:36 compute-1 nova_compute[230183]:       <log file="/var/lib/nova/instances/c833a97e-dc45-489f-98e1-a2d33397836c/console.log" append="off"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     </serial>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <video>
Nov 23 21:14:36 compute-1 nova_compute[230183]:       <model type="virtio"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     </video>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <input type="tablet" bus="usb"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <rng model="virtio">
Nov 23 21:14:36 compute-1 nova_compute[230183]:       <backend model="random">/dev/urandom</backend>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     </rng>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <controller type="usb" index="0"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     <memballoon model="virtio">
Nov 23 21:14:36 compute-1 nova_compute[230183]:       <stats period="10"/>
Nov 23 21:14:36 compute-1 nova_compute[230183]:     </memballoon>
Nov 23 21:14:36 compute-1 nova_compute[230183]:   </devices>
Nov 23 21:14:36 compute-1 nova_compute[230183]: </domain>
Nov 23 21:14:36 compute-1 nova_compute[230183]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 23 21:14:36 compute-1 nova_compute[230183]: 2025-11-23 21:14:36.491 230187 DEBUG nova.compute.manager [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Preparing to wait for external event network-vif-plugged-b71755c1-8148-40c0-884d-aad83ae8602a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 23 21:14:36 compute-1 nova_compute[230183]: 2025-11-23 21:14:36.492 230187 DEBUG oslo_concurrency.lockutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "c833a97e-dc45-489f-98e1-a2d33397836c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:14:36 compute-1 nova_compute[230183]: 2025-11-23 21:14:36.492 230187 DEBUG oslo_concurrency.lockutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "c833a97e-dc45-489f-98e1-a2d33397836c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:14:36 compute-1 nova_compute[230183]: 2025-11-23 21:14:36.492 230187 DEBUG oslo_concurrency.lockutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "c833a97e-dc45-489f-98e1-a2d33397836c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:14:36 compute-1 nova_compute[230183]: 2025-11-23 21:14:36.493 230187 DEBUG nova.virt.libvirt.vif [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-23T21:14:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-582512634',display_name='tempest-TestNetworkBasicOps-server-582512634',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-582512634',id=10,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCQNYwhSj43+WrihAMXXrcf3ffitbakZUmwhOuijrPcqM40TmVQc3wMfjU/cZyNHNeBKw0TKec9vXExOxmFIsncMN4D0yIYffuxIytj1M98N5vK6pCD4pL97G7XeskRufg==',key_name='tempest-TestNetworkBasicOps-1166238543',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-2k77ohtd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T21:14:30Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=c833a97e-dc45-489f-98e1-a2d33397836c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b71755c1-8148-40c0-884d-aad83ae8602a", "address": "fa:16:3e:2a:70:ad", "network": {"id": "33439544-e5f9-4500-9a9c-dbc1c4cd858c", "bridge": "br-int", "label": "tempest-network-smoke--1819281856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb71755c1-81", "ovs_interfaceid": "b71755c1-8148-40c0-884d-aad83ae8602a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 23 21:14:36 compute-1 nova_compute[230183]: 2025-11-23 21:14:36.494 230187 DEBUG nova.network.os_vif_util [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "b71755c1-8148-40c0-884d-aad83ae8602a", "address": "fa:16:3e:2a:70:ad", "network": {"id": "33439544-e5f9-4500-9a9c-dbc1c4cd858c", "bridge": "br-int", "label": "tempest-network-smoke--1819281856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb71755c1-81", "ovs_interfaceid": "b71755c1-8148-40c0-884d-aad83ae8602a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 23 21:14:36 compute-1 nova_compute[230183]: 2025-11-23 21:14:36.494 230187 DEBUG nova.network.os_vif_util [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:70:ad,bridge_name='br-int',has_traffic_filtering=True,id=b71755c1-8148-40c0-884d-aad83ae8602a,network=Network(33439544-e5f9-4500-9a9c-dbc1c4cd858c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb71755c1-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 23 21:14:36 compute-1 nova_compute[230183]: 2025-11-23 21:14:36.495 230187 DEBUG os_vif [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:70:ad,bridge_name='br-int',has_traffic_filtering=True,id=b71755c1-8148-40c0-884d-aad83ae8602a,network=Network(33439544-e5f9-4500-9a9c-dbc1c4cd858c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb71755c1-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 23 21:14:36 compute-1 nova_compute[230183]: 2025-11-23 21:14:36.495 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:36 compute-1 nova_compute[230183]: 2025-11-23 21:14:36.496 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:14:36 compute-1 nova_compute[230183]: 2025-11-23 21:14:36.496 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 23 21:14:36 compute-1 nova_compute[230183]: 2025-11-23 21:14:36.499 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:36 compute-1 nova_compute[230183]: 2025-11-23 21:14:36.500 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb71755c1-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:14:36 compute-1 nova_compute[230183]: 2025-11-23 21:14:36.500 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb71755c1-81, col_values=(('external_ids', {'iface-id': 'b71755c1-8148-40c0-884d-aad83ae8602a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2a:70:ad', 'vm-uuid': 'c833a97e-dc45-489f-98e1-a2d33397836c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:14:36 compute-1 nova_compute[230183]: 2025-11-23 21:14:36.502 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:36 compute-1 NetworkManager[49021]: <info>  [1763932476.5032] manager: (tapb71755c1-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Nov 23 21:14:36 compute-1 nova_compute[230183]: 2025-11-23 21:14:36.504 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:14:36 compute-1 nova_compute[230183]: 2025-11-23 21:14:36.509 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:36 compute-1 nova_compute[230183]: 2025-11-23 21:14:36.510 230187 INFO os_vif [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:70:ad,bridge_name='br-int',has_traffic_filtering=True,id=b71755c1-8148-40c0-884d-aad83ae8602a,network=Network(33439544-e5f9-4500-9a9c-dbc1c4cd858c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb71755c1-81')
Nov 23 21:14:36 compute-1 nova_compute[230183]: 2025-11-23 21:14:36.548 230187 DEBUG nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 23 21:14:36 compute-1 nova_compute[230183]: 2025-11-23 21:14:36.548 230187 DEBUG nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 23 21:14:36 compute-1 nova_compute[230183]: 2025-11-23 21:14:36.548 230187 DEBUG nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No VIF found with MAC fa:16:3e:2a:70:ad, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 23 21:14:36 compute-1 nova_compute[230183]: 2025-11-23 21:14:36.549 230187 INFO nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Using config drive
Nov 23 21:14:36 compute-1 nova_compute[230183]: 2025-11-23 21:14:36.570 230187 DEBUG nova.storage.rbd_utils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image c833a97e-dc45-489f-98e1-a2d33397836c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:14:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:14:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:36.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:14:37 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3264451218' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:14:37 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/440052661' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:14:37 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:14:37 compute-1 nova_compute[230183]: 2025-11-23 21:14:37.576 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:37 compute-1 nova_compute[230183]: 2025-11-23 21:14:37.583 230187 INFO nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Creating config drive at /var/lib/nova/instances/c833a97e-dc45-489f-98e1-a2d33397836c/disk.config
Nov 23 21:14:37 compute-1 nova_compute[230183]: 2025-11-23 21:14:37.588 230187 DEBUG oslo_concurrency.processutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c833a97e-dc45-489f-98e1-a2d33397836c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprjumm1zo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:14:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:37.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:37 compute-1 nova_compute[230183]: 2025-11-23 21:14:37.731 230187 DEBUG oslo_concurrency.processutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c833a97e-dc45-489f-98e1-a2d33397836c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprjumm1zo" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:14:37 compute-1 nova_compute[230183]: 2025-11-23 21:14:37.765 230187 DEBUG nova.storage.rbd_utils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image c833a97e-dc45-489f-98e1-a2d33397836c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:14:37 compute-1 nova_compute[230183]: 2025-11-23 21:14:37.769 230187 DEBUG oslo_concurrency.processutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c833a97e-dc45-489f-98e1-a2d33397836c/disk.config c833a97e-dc45-489f-98e1-a2d33397836c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:14:37 compute-1 nova_compute[230183]: 2025-11-23 21:14:37.940 230187 DEBUG oslo_concurrency.processutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c833a97e-dc45-489f-98e1-a2d33397836c/disk.config c833a97e-dc45-489f-98e1-a2d33397836c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:14:37 compute-1 nova_compute[230183]: 2025-11-23 21:14:37.942 230187 INFO nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Deleting local config drive /var/lib/nova/instances/c833a97e-dc45-489f-98e1-a2d33397836c/disk.config because it was imported into RBD.
Nov 23 21:14:37 compute-1 kernel: tapb71755c1-81: entered promiscuous mode
Nov 23 21:14:37 compute-1 NetworkManager[49021]: <info>  [1763932477.9987] manager: (tapb71755c1-81): new Tun device (/org/freedesktop/NetworkManager/Devices/66)
Nov 23 21:14:37 compute-1 ovn_controller[132845]: 2025-11-23T21:14:37Z|00104|binding|INFO|Claiming lport b71755c1-8148-40c0-884d-aad83ae8602a for this chassis.
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:37.999 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:38 compute-1 ovn_controller[132845]: 2025-11-23T21:14:38Z|00105|binding|INFO|b71755c1-8148-40c0-884d-aad83ae8602a: Claiming fa:16:3e:2a:70:ad 10.100.0.10
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.006 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.021 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:70:ad 10.100.0.10'], port_security=['fa:16:3e:2a:70:ad 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c833a97e-dc45-489f-98e1-a2d33397836c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-33439544-e5f9-4500-9a9c-dbc1c4cd858c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '01afc80e-05e3-4e44-a9a7-ca2439f76ab4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=94ce2d65-f870-4d9e-a5f2-e431f68e3936, chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=b71755c1-8148-40c0-884d-aad83ae8602a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.023 142158 INFO neutron.agent.ovn.metadata.agent [-] Port b71755c1-8148-40c0-884d-aad83ae8602a in datapath 33439544-e5f9-4500-9a9c-dbc1c4cd858c bound to our chassis
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.025 142158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 33439544-e5f9-4500-9a9c-dbc1c4cd858c
Nov 23 21:14:38 compute-1 systemd-udevd[241555]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 21:14:38 compute-1 NetworkManager[49021]: <info>  [1763932478.0395] device (tapb71755c1-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 23 21:14:38 compute-1 NetworkManager[49021]: <info>  [1763932478.0403] device (tapb71755c1-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.042 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[ae39b72b-a1ce-4b7b-992b-b83a9b165b0f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.045 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap33439544-e1 in ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 23 21:14:38 compute-1 systemd-machined[193469]: New machine qemu-6-instance-0000000a.
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.047 233901 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap33439544-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.047 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[3ab0238c-ba55-4f6f-8691-49cd850e594f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.049 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[b3cea3a1-a173-4282-ae0c-bedf9552293a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.063 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[0c24c3cc-634f-4cbf-8226-533d8c398504]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.089 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[79f52c09-6b56-4a7f-86f7-1cb53ab2091c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:14:38 compute-1 systemd[1]: Started Virtual Machine qemu-6-instance-0000000a.
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.098 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:38 compute-1 ovn_controller[132845]: 2025-11-23T21:14:38Z|00106|binding|INFO|Setting lport b71755c1-8148-40c0-884d-aad83ae8602a ovn-installed in OVS
Nov 23 21:14:38 compute-1 ovn_controller[132845]: 2025-11-23T21:14:38Z|00107|binding|INFO|Setting lport b71755c1-8148-40c0-884d-aad83ae8602a up in Southbound
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.107 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.116 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[f54bf081-271e-4364-9b9c-d865becf9add]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.120 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[0700fbf1-0a1b-45d2-b97f-ea6ccd2df5f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:14:38 compute-1 ceph-mon[80135]: pgmap v996: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 23 21:14:38 compute-1 NetworkManager[49021]: <info>  [1763932478.1253] manager: (tap33439544-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/67)
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.154 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[cf809797-c6db-4855-bc63-50e0e9663a1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.157 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[c1d330d7-442e-4029-9753-10a328293308]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:14:38 compute-1 NetworkManager[49021]: <info>  [1763932478.1799] device (tap33439544-e0): carrier: link connected
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.187 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[76344240-17e5-468a-b712-46bc56c1937c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.205 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[62e2e13e-ec90-4ed1-a3b6-df9603a568d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap33439544-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:1d:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442675, 'reachable_time': 25570, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 241591, 'error': None, 'target': 'ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.221 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[3d308a2c-685d-4764-944a-d43c6697b56f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feef:1d6f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 442675, 'tstamp': 442675}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 241592, 'error': None, 'target': 'ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.239 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[698cbdb4-6464-450c-a6f3-cc9b4656bfa7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap33439544-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:1d:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442675, 'reachable_time': 25570, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 241593, 'error': None, 'target': 'ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.267 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[4bf18e2b-8afa-46df-90de-b57b91901771]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.338 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[cf53605a-e6d4-449a-867e-37f0265e8cf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.339 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap33439544-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.340 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.340 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap33439544-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.342 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:38 compute-1 NetworkManager[49021]: <info>  [1763932478.3430] manager: (tap33439544-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Nov 23 21:14:38 compute-1 kernel: tap33439544-e0: entered promiscuous mode
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.346 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap33439544-e0, col_values=(('external_ids', {'iface-id': '3160bfc6-c855-4bc2-a26d-97781eac404c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:14:38 compute-1 ovn_controller[132845]: 2025-11-23T21:14:38Z|00108|binding|INFO|Releasing lport 3160bfc6-c855-4bc2-a26d-97781eac404c from this chassis (sb_readonly=0)
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.352 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.355 230187 DEBUG nova.network.neutron [req-79bc167b-e41f-4be0-b9c5-20b19be51c88 req-892a5d5b-35c7-4c68-80b6-40fa7eba739a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Updated VIF entry in instance network info cache for port b71755c1-8148-40c0-884d-aad83ae8602a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.355 230187 DEBUG nova.network.neutron [req-79bc167b-e41f-4be0-b9c5-20b19be51c88 req-892a5d5b-35c7-4c68-80b6-40fa7eba739a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Updating instance_info_cache with network_info: [{"id": "b71755c1-8148-40c0-884d-aad83ae8602a", "address": "fa:16:3e:2a:70:ad", "network": {"id": "33439544-e5f9-4500-9a9c-dbc1c4cd858c", "bridge": "br-int", "label": "tempest-network-smoke--1819281856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb71755c1-81", "ovs_interfaceid": "b71755c1-8148-40c0-884d-aad83ae8602a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.366 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.367 142158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/33439544-e5f9-4500-9a9c-dbc1c4cd858c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/33439544-e5f9-4500-9a9c-dbc1c4cd858c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.367 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[b536def7-e62d-4c8c-a584-5d554a784cf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.368 142158 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]: global
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]:     log         /dev/log local0 debug
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]:     log-tag     haproxy-metadata-proxy-33439544-e5f9-4500-9a9c-dbc1c4cd858c
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]:     user        root
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]:     group       root
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]:     maxconn     1024
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]:     pidfile     /var/lib/neutron/external/pids/33439544-e5f9-4500-9a9c-dbc1c4cd858c.pid.haproxy
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]:     daemon
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]: 
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]: defaults
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]:     log global
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]:     mode http
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]:     option httplog
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]:     option dontlognull
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]:     option http-server-close
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]:     option forwardfor
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]:     retries                 3
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]:     timeout http-request    30s
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]:     timeout connect         30s
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]:     timeout client          32s
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]:     timeout server          32s
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]:     timeout http-keep-alive 30s
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]: 
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]: 
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]: listen listener
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]:     bind 169.254.169.254:80
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]:     server metadata /var/lib/neutron/metadata_proxy
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]:     http-request add-header X-OVN-Network-ID 33439544-e5f9-4500-9a9c-dbc1c4cd858c
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 23 21:14:38 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.370 142158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c', 'env', 'PROCESS_TAG=haproxy-33439544-e5f9-4500-9a9c-dbc1c4cd858c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/33439544-e5f9-4500-9a9c-dbc1c4cd858c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.373 230187 DEBUG oslo_concurrency.lockutils [req-79bc167b-e41f-4be0-b9c5-20b19be51c88 req-892a5d5b-35c7-4c68-80b6-40fa7eba739a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-c833a97e-dc45-489f-98e1-a2d33397836c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.584 230187 DEBUG nova.compute.manager [req-392ec6ea-38a4-40d6-955c-d42d5d957a3c req-fea96aec-231b-4cc4-ac33-bbd00cffd22d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Received event network-vif-plugged-b71755c1-8148-40c0-884d-aad83ae8602a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.585 230187 DEBUG oslo_concurrency.lockutils [req-392ec6ea-38a4-40d6-955c-d42d5d957a3c req-fea96aec-231b-4cc4-ac33-bbd00cffd22d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "c833a97e-dc45-489f-98e1-a2d33397836c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.585 230187 DEBUG oslo_concurrency.lockutils [req-392ec6ea-38a4-40d6-955c-d42d5d957a3c req-fea96aec-231b-4cc4-ac33-bbd00cffd22d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "c833a97e-dc45-489f-98e1-a2d33397836c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.585 230187 DEBUG oslo_concurrency.lockutils [req-392ec6ea-38a4-40d6-955c-d42d5d957a3c req-fea96aec-231b-4cc4-ac33-bbd00cffd22d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "c833a97e-dc45-489f-98e1-a2d33397836c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.586 230187 DEBUG nova.compute.manager [req-392ec6ea-38a4-40d6-955c-d42d5d957a3c req-fea96aec-231b-4cc4-ac33-bbd00cffd22d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Processing event network-vif-plugged-b71755c1-8148-40c0-884d-aad83ae8602a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 23 21:14:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:14:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:38.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.707 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932478.7071743, c833a97e-dc45-489f-98e1-a2d33397836c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.708 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] VM Started (Lifecycle Event)
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.710 230187 DEBUG nova.compute.manager [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.712 230187 DEBUG nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.715 230187 INFO nova.virt.libvirt.driver [-] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Instance spawned successfully.
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.715 230187 DEBUG nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.755 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.759 230187 DEBUG nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.759 230187 DEBUG nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.760 230187 DEBUG nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.760 230187 DEBUG nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.760 230187 DEBUG nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.761 230187 DEBUG nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:14:38 compute-1 podman[241667]: 2025-11-23 21:14:38.761530395 +0000 UTC m=+0.048206297 container create e67131015f7e1d183875087188ccc15f5dc5862c44132df760e6ce2f33a33b53 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.765 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 23 21:14:38 compute-1 systemd[1]: Started libpod-conmon-e67131015f7e1d183875087188ccc15f5dc5862c44132df760e6ce2f33a33b53.scope.
Nov 23 21:14:38 compute-1 podman[241667]: 2025-11-23 21:14:38.735747388 +0000 UTC m=+0.022423290 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.847 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.848 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932478.7072747, c833a97e-dc45-489f-98e1-a2d33397836c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.848 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] VM Paused (Lifecycle Event)
Nov 23 21:14:38 compute-1 systemd[1]: Started libcrun container.
Nov 23 21:14:38 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d68e6c64db3a5cf013782d6e596c5e985b2d9de44852e9704045df6024ec0e3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 21:14:38 compute-1 podman[241667]: 2025-11-23 21:14:38.863983421 +0000 UTC m=+0.150659373 container init e67131015f7e1d183875087188ccc15f5dc5862c44132df760e6ce2f33a33b53 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 21:14:38 compute-1 podman[241667]: 2025-11-23 21:14:38.869165778 +0000 UTC m=+0.155841690 container start e67131015f7e1d183875087188ccc15f5dc5862c44132df760e6ce2f33a33b53 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 23 21:14:38 compute-1 neutron-haproxy-ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c[241682]: [NOTICE]   (241686) : New worker (241688) forked
Nov 23 21:14:38 compute-1 neutron-haproxy-ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c[241682]: [NOTICE]   (241686) : Loading success.
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.943 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.946 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932478.7122378, c833a97e-dc45-489f-98e1-a2d33397836c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.947 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] VM Resumed (Lifecycle Event)
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.969 230187 INFO nova.compute.manager [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Took 8.40 seconds to spawn the instance on the hypervisor.
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.969 230187 DEBUG nova.compute.manager [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.970 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:14:38 compute-1 nova_compute[230183]: 2025-11-23 21:14:38.976 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 23 21:14:39 compute-1 nova_compute[230183]: 2025-11-23 21:14:39.004 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 23 21:14:39 compute-1 nova_compute[230183]: 2025-11-23 21:14:39.041 230187 INFO nova.compute.manager [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Took 9.36 seconds to build instance.
Nov 23 21:14:39 compute-1 nova_compute[230183]: 2025-11-23 21:14:39.055 230187 DEBUG oslo_concurrency.lockutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "c833a97e-dc45-489f-98e1-a2d33397836c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.443s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:14:39 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0.
Nov 23 21:14:39 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:14:39.405074) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 21:14:39 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58
Nov 23 21:14:39 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932479405119, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1110, "num_deletes": 503, "total_data_size": 1749201, "memory_usage": 1777744, "flush_reason": "Manual Compaction"}
Nov 23 21:14:39 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started
Nov 23 21:14:39 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932479416439, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 1006534, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30263, "largest_seqno": 31368, "table_properties": {"data_size": 1002115, "index_size": 1559, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13867, "raw_average_key_size": 19, "raw_value_size": 991050, "raw_average_value_size": 1393, "num_data_blocks": 68, "num_entries": 711, "num_filter_entries": 711, "num_deletions": 503, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763932421, "oldest_key_time": 1763932421, "file_creation_time": 1763932479, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Nov 23 21:14:39 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 11445 microseconds, and 6431 cpu microseconds.
Nov 23 21:14:39 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 21:14:39 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:14:39.416515) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 1006534 bytes OK
Nov 23 21:14:39 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:14:39.416543) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started
Nov 23 21:14:39 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:14:39.418080) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done
Nov 23 21:14:39 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:14:39.418104) EVENT_LOG_v1 {"time_micros": 1763932479418096, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 21:14:39 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:14:39.418127) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 21:14:39 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1742764, prev total WAL file size 1742764, number of live WAL files 2.
Nov 23 21:14:39 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:14:39 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:14:39.419128) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Nov 23 21:14:39 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 21:14:39 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(982KB)], [57(16MB)]
Nov 23 21:14:39 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932479419179, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 18727111, "oldest_snapshot_seqno": -1}
Nov 23 21:14:39 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 5784 keys, 12640468 bytes, temperature: kUnknown
Nov 23 21:14:39 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932479605971, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 12640468, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12603743, "index_size": 21191, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14469, "raw_key_size": 149716, "raw_average_key_size": 25, "raw_value_size": 12501307, "raw_average_value_size": 2161, "num_data_blocks": 849, "num_entries": 5784, "num_filter_entries": 5784, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763932479, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}}
Nov 23 21:14:39 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 21:14:39 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:14:39.606196) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 12640468 bytes
Nov 23 21:14:39 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:14:39.609521) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 100.2 rd, 67.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 16.9 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(31.2) write-amplify(12.6) OK, records in: 6796, records dropped: 1012 output_compression: NoCompression
Nov 23 21:14:39 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:14:39.609537) EVENT_LOG_v1 {"time_micros": 1763932479609530, "job": 34, "event": "compaction_finished", "compaction_time_micros": 186845, "compaction_time_cpu_micros": 48574, "output_level": 6, "num_output_files": 1, "total_output_size": 12640468, "num_input_records": 6796, "num_output_records": 5784, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 21:14:39 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:14:39 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932479609768, "job": 34, "event": "table_file_deletion", "file_number": 59}
Nov 23 21:14:39 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:14:39 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932479612322, "job": 34, "event": "table_file_deletion", "file_number": 57}
Nov 23 21:14:39 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:14:39.419036) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:14:39 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:14:39.612391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:14:39 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:14:39.612397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:14:39 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:14:39.612399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:14:39 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:14:39.612400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:14:39 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:14:39.612402) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:14:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:39.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:40 compute-1 ceph-mon[80135]: pgmap v997: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Nov 23 21:14:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:14:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:40.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:14:40 compute-1 nova_compute[230183]: 2025-11-23 21:14:40.674 230187 DEBUG nova.compute.manager [req-2312e00e-4a01-44fd-8fd9-fd08cc61cf80 req-be87a50e-25fc-4c3c-b437-3be6a1a3758f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Received event network-vif-plugged-b71755c1-8148-40c0-884d-aad83ae8602a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:14:40 compute-1 nova_compute[230183]: 2025-11-23 21:14:40.675 230187 DEBUG oslo_concurrency.lockutils [req-2312e00e-4a01-44fd-8fd9-fd08cc61cf80 req-be87a50e-25fc-4c3c-b437-3be6a1a3758f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "c833a97e-dc45-489f-98e1-a2d33397836c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:14:40 compute-1 nova_compute[230183]: 2025-11-23 21:14:40.675 230187 DEBUG oslo_concurrency.lockutils [req-2312e00e-4a01-44fd-8fd9-fd08cc61cf80 req-be87a50e-25fc-4c3c-b437-3be6a1a3758f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "c833a97e-dc45-489f-98e1-a2d33397836c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:14:40 compute-1 nova_compute[230183]: 2025-11-23 21:14:40.675 230187 DEBUG oslo_concurrency.lockutils [req-2312e00e-4a01-44fd-8fd9-fd08cc61cf80 req-be87a50e-25fc-4c3c-b437-3be6a1a3758f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "c833a97e-dc45-489f-98e1-a2d33397836c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:14:40 compute-1 nova_compute[230183]: 2025-11-23 21:14:40.676 230187 DEBUG nova.compute.manager [req-2312e00e-4a01-44fd-8fd9-fd08cc61cf80 req-be87a50e-25fc-4c3c-b437-3be6a1a3758f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] No waiting events found dispatching network-vif-plugged-b71755c1-8148-40c0-884d-aad83ae8602a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 21:14:40 compute-1 nova_compute[230183]: 2025-11-23 21:14:40.676 230187 WARNING nova.compute.manager [req-2312e00e-4a01-44fd-8fd9-fd08cc61cf80 req-be87a50e-25fc-4c3c-b437-3be6a1a3758f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Received unexpected event network-vif-plugged-b71755c1-8148-40c0-884d-aad83ae8602a for instance with vm_state active and task_state None.
Nov 23 21:14:41 compute-1 nova_compute[230183]: 2025-11-23 21:14:41.503 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:14:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:41.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:14:42 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:14:42 compute-1 ceph-mon[80135]: pgmap v998: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Nov 23 21:14:42 compute-1 nova_compute[230183]: 2025-11-23 21:14:42.480 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:42.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:43.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:44 compute-1 ceph-mon[80135]: pgmap v999: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Nov 23 21:14:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:14:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:44.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:14:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:45.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:46 compute-1 ovn_controller[132845]: 2025-11-23T21:14:46Z|00109|binding|INFO|Releasing lport 3160bfc6-c855-4bc2-a26d-97781eac404c from this chassis (sb_readonly=0)
Nov 23 21:14:46 compute-1 NetworkManager[49021]: <info>  [1763932486.0500] manager: (patch-br-int-to-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Nov 23 21:14:46 compute-1 NetworkManager[49021]: <info>  [1763932486.0513] manager: (patch-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Nov 23 21:14:46 compute-1 nova_compute[230183]: 2025-11-23 21:14:46.061 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:46 compute-1 ovn_controller[132845]: 2025-11-23T21:14:46Z|00110|binding|INFO|Releasing lport 3160bfc6-c855-4bc2-a26d-97781eac404c from this chassis (sb_readonly=0)
Nov 23 21:14:46 compute-1 nova_compute[230183]: 2025-11-23 21:14:46.084 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:46 compute-1 nova_compute[230183]: 2025-11-23 21:14:46.088 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:46 compute-1 sudo[241702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:14:46 compute-1 sudo[241702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:14:46 compute-1 sudo[241702]: pam_unix(sudo:session): session closed for user root
Nov 23 21:14:46 compute-1 nova_compute[230183]: 2025-11-23 21:14:46.472 230187 DEBUG nova.compute.manager [req-99572406-97f2-4bc5-9f00-73495233bdeb req-e7ae87d9-8c61-4466-a63e-b9beb7ee4e47 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Received event network-changed-b71755c1-8148-40c0-884d-aad83ae8602a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:14:46 compute-1 nova_compute[230183]: 2025-11-23 21:14:46.472 230187 DEBUG nova.compute.manager [req-99572406-97f2-4bc5-9f00-73495233bdeb req-e7ae87d9-8c61-4466-a63e-b9beb7ee4e47 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Refreshing instance network info cache due to event network-changed-b71755c1-8148-40c0-884d-aad83ae8602a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 23 21:14:46 compute-1 nova_compute[230183]: 2025-11-23 21:14:46.473 230187 DEBUG oslo_concurrency.lockutils [req-99572406-97f2-4bc5-9f00-73495233bdeb req-e7ae87d9-8c61-4466-a63e-b9beb7ee4e47 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-c833a97e-dc45-489f-98e1-a2d33397836c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:14:46 compute-1 nova_compute[230183]: 2025-11-23 21:14:46.473 230187 DEBUG oslo_concurrency.lockutils [req-99572406-97f2-4bc5-9f00-73495233bdeb req-e7ae87d9-8c61-4466-a63e-b9beb7ee4e47 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-c833a97e-dc45-489f-98e1-a2d33397836c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:14:46 compute-1 nova_compute[230183]: 2025-11-23 21:14:46.473 230187 DEBUG nova.network.neutron [req-99572406-97f2-4bc5-9f00-73495233bdeb req-e7ae87d9-8c61-4466-a63e-b9beb7ee4e47 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Refreshing network info cache for port b71755c1-8148-40c0-884d-aad83ae8602a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 23 21:14:46 compute-1 ceph-mon[80135]: pgmap v1000: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 23 21:14:46 compute-1 nova_compute[230183]: 2025-11-23 21:14:46.504 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:14:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:46.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:14:47 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:14:47 compute-1 nova_compute[230183]: 2025-11-23 21:14:47.482 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:47.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:48.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:49 compute-1 ceph-mon[80135]: pgmap v1001: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 23 21:14:49 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:14:49 compute-1 nova_compute[230183]: 2025-11-23 21:14:49.214 230187 DEBUG nova.network.neutron [req-99572406-97f2-4bc5-9f00-73495233bdeb req-e7ae87d9-8c61-4466-a63e-b9beb7ee4e47 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Updated VIF entry in instance network info cache for port b71755c1-8148-40c0-884d-aad83ae8602a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 23 21:14:49 compute-1 nova_compute[230183]: 2025-11-23 21:14:49.215 230187 DEBUG nova.network.neutron [req-99572406-97f2-4bc5-9f00-73495233bdeb req-e7ae87d9-8c61-4466-a63e-b9beb7ee4e47 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Updating instance_info_cache with network_info: [{"id": "b71755c1-8148-40c0-884d-aad83ae8602a", "address": "fa:16:3e:2a:70:ad", "network": {"id": "33439544-e5f9-4500-9a9c-dbc1c4cd858c", "bridge": "br-int", "label": "tempest-network-smoke--1819281856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb71755c1-81", "ovs_interfaceid": "b71755c1-8148-40c0-884d-aad83ae8602a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:14:49 compute-1 nova_compute[230183]: 2025-11-23 21:14:49.231 230187 DEBUG oslo_concurrency.lockutils [req-99572406-97f2-4bc5-9f00-73495233bdeb req-e7ae87d9-8c61-4466-a63e-b9beb7ee4e47 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-c833a97e-dc45-489f-98e1-a2d33397836c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:14:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:14:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:49.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:14:50 compute-1 ceph-mon[80135]: pgmap v1002: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Nov 23 21:14:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:50.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:14:51.071 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:14:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:14:51.072 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:14:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:14:51.072 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:14:51 compute-1 nova_compute[230183]: 2025-11-23 21:14:51.506 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:51.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:52 compute-1 ovn_controller[132845]: 2025-11-23T21:14:52Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2a:70:ad 10.100.0.10
Nov 23 21:14:52 compute-1 ovn_controller[132845]: 2025-11-23T21:14:52Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2a:70:ad 10.100.0.10
Nov 23 21:14:52 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:14:52 compute-1 nova_compute[230183]: 2025-11-23 21:14:52.484 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:52 compute-1 ceph-mon[80135]: pgmap v1003: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Nov 23 21:14:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:52.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:14:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:53.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:14:54 compute-1 ceph-mon[80135]: pgmap v1004: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Nov 23 21:14:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:14:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:54.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:14:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:55.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:56 compute-1 nova_compute[230183]: 2025-11-23 21:14:56.508 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:56 compute-1 ceph-mon[80135]: pgmap v1005: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Nov 23 21:14:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:14:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:56.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:14:57 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:14:57 compute-1 nova_compute[230183]: 2025-11-23 21:14:57.486 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:14:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:14:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:57.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:14:58 compute-1 ceph-mon[80135]: pgmap v1006: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 23 21:14:58 compute-1 podman[241736]: 2025-11-23 21:14:58.664646842 +0000 UTC m=+0.066504925 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 23 21:14:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:14:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:58.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:14:58 compute-1 podman[241735]: 2025-11-23 21:14:58.711170425 +0000 UTC m=+0.112087813 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 21:14:58 compute-1 nova_compute[230183]: 2025-11-23 21:14:58.920 230187 INFO nova.compute.manager [None req-4864e87e-b5c5-41e1-a783-f5e802fe6f1e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Get console output
Nov 23 21:14:58 compute-1 nova_compute[230183]: 2025-11-23 21:14:58.925 234120 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 23 21:14:59 compute-1 ovn_controller[132845]: 2025-11-23T21:14:59Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2a:70:ad 10.100.0.10
Nov 23 21:14:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:14:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:14:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:59.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:15:00 compute-1 ceph-mon[80135]: pgmap v1007: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 23 21:15:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:15:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:00.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:15:01 compute-1 ovn_controller[132845]: 2025-11-23T21:15:01Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2a:70:ad 10.100.0.10
Nov 23 21:15:01 compute-1 nova_compute[230183]: 2025-11-23 21:15:01.511 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:01.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:02 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:15:02 compute-1 nova_compute[230183]: 2025-11-23 21:15:02.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:15:02 compute-1 nova_compute[230183]: 2025-11-23 21:15:02.489 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:02.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:02 compute-1 ceph-mon[80135]: pgmap v1008: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 23 21:15:03 compute-1 podman[241783]: 2025-11-23 21:15:03.648631436 +0000 UTC m=+0.062325015 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 23 21:15:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:03.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:15:04 compute-1 nova_compute[230183]: 2025-11-23 21:15:04.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:15:04 compute-1 nova_compute[230183]: 2025-11-23 21:15:04.450 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:15:04 compute-1 nova_compute[230183]: 2025-11-23 21:15:04.450 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:15:04 compute-1 nova_compute[230183]: 2025-11-23 21:15:04.450 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:15:04 compute-1 nova_compute[230183]: 2025-11-23 21:15:04.451 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 21:15:04 compute-1 nova_compute[230183]: 2025-11-23 21:15:04.451 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:15:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:04.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:04 compute-1 ceph-mon[80135]: pgmap v1009: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 23 21:15:04 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:15:04 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3371525506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:15:04 compute-1 nova_compute[230183]: 2025-11-23 21:15:04.915 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:15:04 compute-1 nova_compute[230183]: 2025-11-23 21:15:04.973 230187 DEBUG nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 23 21:15:04 compute-1 nova_compute[230183]: 2025-11-23 21:15:04.973 230187 DEBUG nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 23 21:15:05 compute-1 nova_compute[230183]: 2025-11-23 21:15:05.148 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 21:15:05 compute-1 nova_compute[230183]: 2025-11-23 21:15:05.149 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4734MB free_disk=59.942752838134766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 21:15:05 compute-1 nova_compute[230183]: 2025-11-23 21:15:05.149 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:15:05 compute-1 nova_compute[230183]: 2025-11-23 21:15:05.149 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:15:05 compute-1 nova_compute[230183]: 2025-11-23 21:15:05.205 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Instance c833a97e-dc45-489f-98e1-a2d33397836c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 23 21:15:05 compute-1 nova_compute[230183]: 2025-11-23 21:15:05.205 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 21:15:05 compute-1 nova_compute[230183]: 2025-11-23 21:15:05.206 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 21:15:05 compute-1 nova_compute[230183]: 2025-11-23 21:15:05.236 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:15:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:05.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:05 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:15:05 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3922220823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:15:05 compute-1 nova_compute[230183]: 2025-11-23 21:15:05.702 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:15:05 compute-1 nova_compute[230183]: 2025-11-23 21:15:05.710 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:15:05 compute-1 nova_compute[230183]: 2025-11-23 21:15:05.734 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:15:05 compute-1 nova_compute[230183]: 2025-11-23 21:15:05.762 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 21:15:05 compute-1 nova_compute[230183]: 2025-11-23 21:15:05.763 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:15:05 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3371525506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:15:05 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3922220823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:15:05 compute-1 nova_compute[230183]: 2025-11-23 21:15:05.792 230187 DEBUG nova.compute.manager [req-1905cd28-440c-4fba-b36a-091cd5409f2e req-19acd4d8-dc90-4efa-938b-114b92b37572 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Received event network-changed-b71755c1-8148-40c0-884d-aad83ae8602a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:15:05 compute-1 nova_compute[230183]: 2025-11-23 21:15:05.793 230187 DEBUG nova.compute.manager [req-1905cd28-440c-4fba-b36a-091cd5409f2e req-19acd4d8-dc90-4efa-938b-114b92b37572 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Refreshing instance network info cache due to event network-changed-b71755c1-8148-40c0-884d-aad83ae8602a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 23 21:15:05 compute-1 nova_compute[230183]: 2025-11-23 21:15:05.793 230187 DEBUG oslo_concurrency.lockutils [req-1905cd28-440c-4fba-b36a-091cd5409f2e req-19acd4d8-dc90-4efa-938b-114b92b37572 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-c833a97e-dc45-489f-98e1-a2d33397836c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:15:05 compute-1 nova_compute[230183]: 2025-11-23 21:15:05.794 230187 DEBUG oslo_concurrency.lockutils [req-1905cd28-440c-4fba-b36a-091cd5409f2e req-19acd4d8-dc90-4efa-938b-114b92b37572 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-c833a97e-dc45-489f-98e1-a2d33397836c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:15:05 compute-1 nova_compute[230183]: 2025-11-23 21:15:05.794 230187 DEBUG nova.network.neutron [req-1905cd28-440c-4fba-b36a-091cd5409f2e req-19acd4d8-dc90-4efa-938b-114b92b37572 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Refreshing network info cache for port b71755c1-8148-40c0-884d-aad83ae8602a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 23 21:15:05 compute-1 ovn_controller[132845]: 2025-11-23T21:15:05Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2a:70:ad 10.100.0.10
Nov 23 21:15:05 compute-1 nova_compute[230183]: 2025-11-23 21:15:05.945 230187 DEBUG oslo_concurrency.lockutils [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "c833a97e-dc45-489f-98e1-a2d33397836c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:15:05 compute-1 nova_compute[230183]: 2025-11-23 21:15:05.945 230187 DEBUG oslo_concurrency.lockutils [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "c833a97e-dc45-489f-98e1-a2d33397836c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:15:05 compute-1 nova_compute[230183]: 2025-11-23 21:15:05.946 230187 DEBUG oslo_concurrency.lockutils [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "c833a97e-dc45-489f-98e1-a2d33397836c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:15:05 compute-1 nova_compute[230183]: 2025-11-23 21:15:05.946 230187 DEBUG oslo_concurrency.lockutils [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "c833a97e-dc45-489f-98e1-a2d33397836c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:15:05 compute-1 nova_compute[230183]: 2025-11-23 21:15:05.946 230187 DEBUG oslo_concurrency.lockutils [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "c833a97e-dc45-489f-98e1-a2d33397836c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:15:05 compute-1 nova_compute[230183]: 2025-11-23 21:15:05.947 230187 INFO nova.compute.manager [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Terminating instance
Nov 23 21:15:05 compute-1 nova_compute[230183]: 2025-11-23 21:15:05.948 230187 DEBUG nova.compute.manager [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 23 21:15:05 compute-1 kernel: tapb71755c1-81 (unregistering): left promiscuous mode
Nov 23 21:15:06 compute-1 NetworkManager[49021]: <info>  [1763932506.0014] device (tapb71755c1-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 23 21:15:06 compute-1 ovn_controller[132845]: 2025-11-23T21:15:06Z|00111|binding|INFO|Releasing lport b71755c1-8148-40c0-884d-aad83ae8602a from this chassis (sb_readonly=0)
Nov 23 21:15:06 compute-1 ovn_controller[132845]: 2025-11-23T21:15:06Z|00112|binding|INFO|Setting lport b71755c1-8148-40c0-884d-aad83ae8602a down in Southbound
Nov 23 21:15:06 compute-1 nova_compute[230183]: 2025-11-23 21:15:06.005 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:06 compute-1 ovn_controller[132845]: 2025-11-23T21:15:06Z|00113|binding|INFO|Removing iface tapb71755c1-81 ovn-installed in OVS
Nov 23 21:15:06 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:06.015 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:70:ad 10.100.0.10'], port_security=['fa:16:3e:2a:70:ad 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c833a97e-dc45-489f-98e1-a2d33397836c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-33439544-e5f9-4500-9a9c-dbc1c4cd858c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '01afc80e-05e3-4e44-a9a7-ca2439f76ab4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=94ce2d65-f870-4d9e-a5f2-e431f68e3936, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=b71755c1-8148-40c0-884d-aad83ae8602a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 21:15:06 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:06.016 142158 INFO neutron.agent.ovn.metadata.agent [-] Port b71755c1-8148-40c0-884d-aad83ae8602a in datapath 33439544-e5f9-4500-9a9c-dbc1c4cd858c unbound from our chassis
Nov 23 21:15:06 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:06.017 142158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 33439544-e5f9-4500-9a9c-dbc1c4cd858c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 23 21:15:06 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:06.019 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[9f69e52c-bd29-4d17-84e7-9671b44821ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:15:06 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:06.020 142158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c namespace which is not needed anymore
Nov 23 21:15:06 compute-1 nova_compute[230183]: 2025-11-23 21:15:06.025 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:06 compute-1 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Nov 23 21:15:06 compute-1 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000a.scope: Consumed 14.183s CPU time.
Nov 23 21:15:06 compute-1 systemd-machined[193469]: Machine qemu-6-instance-0000000a terminated.
Nov 23 21:15:06 compute-1 neutron-haproxy-ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c[241682]: [NOTICE]   (241686) : haproxy version is 2.8.14-c23fe91
Nov 23 21:15:06 compute-1 neutron-haproxy-ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c[241682]: [NOTICE]   (241686) : path to executable is /usr/sbin/haproxy
Nov 23 21:15:06 compute-1 neutron-haproxy-ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c[241682]: [WARNING]  (241686) : Exiting Master process...
Nov 23 21:15:06 compute-1 neutron-haproxy-ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c[241682]: [ALERT]    (241686) : Current worker (241688) exited with code 143 (Terminated)
Nov 23 21:15:06 compute-1 neutron-haproxy-ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c[241682]: [WARNING]  (241686) : All workers exited. Exiting... (0)
Nov 23 21:15:06 compute-1 systemd[1]: libpod-e67131015f7e1d183875087188ccc15f5dc5862c44132df760e6ce2f33a33b53.scope: Deactivated successfully.
Nov 23 21:15:06 compute-1 podman[241875]: 2025-11-23 21:15:06.157816036 +0000 UTC m=+0.044478718 container died e67131015f7e1d183875087188ccc15f5dc5862c44132df760e6ce2f33a33b53 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 23 21:15:06 compute-1 nova_compute[230183]: 2025-11-23 21:15:06.181 230187 INFO nova.virt.libvirt.driver [-] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Instance destroyed successfully.
Nov 23 21:15:06 compute-1 nova_compute[230183]: 2025-11-23 21:15:06.181 230187 DEBUG nova.objects.instance [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'resources' on Instance uuid c833a97e-dc45-489f-98e1-a2d33397836c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:15:06 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e67131015f7e1d183875087188ccc15f5dc5862c44132df760e6ce2f33a33b53-userdata-shm.mount: Deactivated successfully.
Nov 23 21:15:06 compute-1 nova_compute[230183]: 2025-11-23 21:15:06.193 230187 DEBUG nova.virt.libvirt.vif [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:14:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-582512634',display_name='tempest-TestNetworkBasicOps-server-582512634',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-582512634',id=10,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCQNYwhSj43+WrihAMXXrcf3ffitbakZUmwhOuijrPcqM40TmVQc3wMfjU/cZyNHNeBKw0TKec9vXExOxmFIsncMN4D0yIYffuxIytj1M98N5vK6pCD4pL97G7XeskRufg==',key_name='tempest-TestNetworkBasicOps-1166238543',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:14:38Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-2k77ohtd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:14:39Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=c833a97e-dc45-489f-98e1-a2d33397836c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b71755c1-8148-40c0-884d-aad83ae8602a", "address": "fa:16:3e:2a:70:ad", "network": {"id": "33439544-e5f9-4500-9a9c-dbc1c4cd858c", "bridge": "br-int", "label": "tempest-network-smoke--1819281856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb71755c1-81", "ovs_interfaceid": "b71755c1-8148-40c0-884d-aad83ae8602a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 23 21:15:06 compute-1 nova_compute[230183]: 2025-11-23 21:15:06.194 230187 DEBUG nova.network.os_vif_util [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "b71755c1-8148-40c0-884d-aad83ae8602a", "address": "fa:16:3e:2a:70:ad", "network": {"id": "33439544-e5f9-4500-9a9c-dbc1c4cd858c", "bridge": "br-int", "label": "tempest-network-smoke--1819281856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb71755c1-81", "ovs_interfaceid": "b71755c1-8148-40c0-884d-aad83ae8602a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 23 21:15:06 compute-1 systemd[1]: var-lib-containers-storage-overlay-1d68e6c64db3a5cf013782d6e596c5e985b2d9de44852e9704045df6024ec0e3-merged.mount: Deactivated successfully.
Nov 23 21:15:06 compute-1 nova_compute[230183]: 2025-11-23 21:15:06.195 230187 DEBUG nova.network.os_vif_util [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2a:70:ad,bridge_name='br-int',has_traffic_filtering=True,id=b71755c1-8148-40c0-884d-aad83ae8602a,network=Network(33439544-e5f9-4500-9a9c-dbc1c4cd858c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb71755c1-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 23 21:15:06 compute-1 nova_compute[230183]: 2025-11-23 21:15:06.195 230187 DEBUG os_vif [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2a:70:ad,bridge_name='br-int',has_traffic_filtering=True,id=b71755c1-8148-40c0-884d-aad83ae8602a,network=Network(33439544-e5f9-4500-9a9c-dbc1c4cd858c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb71755c1-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 23 21:15:06 compute-1 nova_compute[230183]: 2025-11-23 21:15:06.197 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:06 compute-1 nova_compute[230183]: 2025-11-23 21:15:06.197 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb71755c1-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:15:06 compute-1 nova_compute[230183]: 2025-11-23 21:15:06.201 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:06 compute-1 nova_compute[230183]: 2025-11-23 21:15:06.202 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:15:06 compute-1 nova_compute[230183]: 2025-11-23 21:15:06.204 230187 INFO os_vif [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2a:70:ad,bridge_name='br-int',has_traffic_filtering=True,id=b71755c1-8148-40c0-884d-aad83ae8602a,network=Network(33439544-e5f9-4500-9a9c-dbc1c4cd858c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb71755c1-81')
Nov 23 21:15:06 compute-1 podman[241875]: 2025-11-23 21:15:06.209121126 +0000 UTC m=+0.095783808 container cleanup e67131015f7e1d183875087188ccc15f5dc5862c44132df760e6ce2f33a33b53 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 21:15:06 compute-1 systemd[1]: libpod-conmon-e67131015f7e1d183875087188ccc15f5dc5862c44132df760e6ce2f33a33b53.scope: Deactivated successfully.
Nov 23 21:15:06 compute-1 podman[241929]: 2025-11-23 21:15:06.280060149 +0000 UTC m=+0.045805613 container remove e67131015f7e1d183875087188ccc15f5dc5862c44132df760e6ce2f33a33b53 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 21:15:06 compute-1 sudo[241930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:15:06 compute-1 sudo[241930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:15:06 compute-1 sudo[241930]: pam_unix(sudo:session): session closed for user root
Nov 23 21:15:06 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:06.286 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[38eb9252-d571-4eff-960b-d3abad976ff4]: (4, ('Sun Nov 23 09:15:06 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c (e67131015f7e1d183875087188ccc15f5dc5862c44132df760e6ce2f33a33b53)\ne67131015f7e1d183875087188ccc15f5dc5862c44132df760e6ce2f33a33b53\nSun Nov 23 09:15:06 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c (e67131015f7e1d183875087188ccc15f5dc5862c44132df760e6ce2f33a33b53)\ne67131015f7e1d183875087188ccc15f5dc5862c44132df760e6ce2f33a33b53\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:15:06 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:06.287 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[5b1d1a39-69a2-4034-9593-348ea0f86c6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:15:06 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:06.288 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap33439544-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:15:06 compute-1 kernel: tap33439544-e0: left promiscuous mode
Nov 23 21:15:06 compute-1 nova_compute[230183]: 2025-11-23 21:15:06.291 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:06 compute-1 nova_compute[230183]: 2025-11-23 21:15:06.302 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:06 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:06.304 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[78d82b9c-666f-4c9e-a58e-f55e8c47f97a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:15:06 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:06.318 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[8cc9d944-61ab-4875-9382-f030aff5dc8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:15:06 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:06.319 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[5e592c75-467a-4a07-bf7b-924b1996a8f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:15:06 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:06.332 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[5173d6b4-1693-41f8-affa-9aa1a505fe3e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442668, 'reachable_time': 43259, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 241970, 'error': None, 'target': 'ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:15:06 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:06.334 142272 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 23 21:15:06 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:06.334 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[32899b22-19ee-411f-aca7-5d7205ffdd4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:15:06 compute-1 systemd[1]: run-netns-ovnmeta\x2d33439544\x2de5f9\x2d4500\x2d9a9c\x2ddbc1c4cd858c.mount: Deactivated successfully.
Nov 23 21:15:06 compute-1 nova_compute[230183]: 2025-11-23 21:15:06.484 230187 DEBUG nova.compute.manager [req-0146ef5e-3c77-43ba-99e3-cc777819796e req-00cf655d-fd98-4b63-8758-c9dbaee84980 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Received event network-vif-unplugged-b71755c1-8148-40c0-884d-aad83ae8602a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:15:06 compute-1 nova_compute[230183]: 2025-11-23 21:15:06.484 230187 DEBUG oslo_concurrency.lockutils [req-0146ef5e-3c77-43ba-99e3-cc777819796e req-00cf655d-fd98-4b63-8758-c9dbaee84980 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "c833a97e-dc45-489f-98e1-a2d33397836c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:15:06 compute-1 nova_compute[230183]: 2025-11-23 21:15:06.485 230187 DEBUG oslo_concurrency.lockutils [req-0146ef5e-3c77-43ba-99e3-cc777819796e req-00cf655d-fd98-4b63-8758-c9dbaee84980 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "c833a97e-dc45-489f-98e1-a2d33397836c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:15:06 compute-1 nova_compute[230183]: 2025-11-23 21:15:06.485 230187 DEBUG oslo_concurrency.lockutils [req-0146ef5e-3c77-43ba-99e3-cc777819796e req-00cf655d-fd98-4b63-8758-c9dbaee84980 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "c833a97e-dc45-489f-98e1-a2d33397836c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:15:06 compute-1 nova_compute[230183]: 2025-11-23 21:15:06.486 230187 DEBUG nova.compute.manager [req-0146ef5e-3c77-43ba-99e3-cc777819796e req-00cf655d-fd98-4b63-8758-c9dbaee84980 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] No waiting events found dispatching network-vif-unplugged-b71755c1-8148-40c0-884d-aad83ae8602a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 21:15:06 compute-1 nova_compute[230183]: 2025-11-23 21:15:06.486 230187 DEBUG nova.compute.manager [req-0146ef5e-3c77-43ba-99e3-cc777819796e req-00cf655d-fd98-4b63-8758-c9dbaee84980 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Received event network-vif-unplugged-b71755c1-8148-40c0-884d-aad83ae8602a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 23 21:15:06 compute-1 nova_compute[230183]: 2025-11-23 21:15:06.668 230187 INFO nova.virt.libvirt.driver [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Deleting instance files /var/lib/nova/instances/c833a97e-dc45-489f-98e1-a2d33397836c_del
Nov 23 21:15:06 compute-1 nova_compute[230183]: 2025-11-23 21:15:06.669 230187 INFO nova.virt.libvirt.driver [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Deletion of /var/lib/nova/instances/c833a97e-dc45-489f-98e1-a2d33397836c_del complete
Nov 23 21:15:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:15:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:06.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:15:06 compute-1 nova_compute[230183]: 2025-11-23 21:15:06.717 230187 INFO nova.compute.manager [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Took 0.77 seconds to destroy the instance on the hypervisor.
Nov 23 21:15:06 compute-1 nova_compute[230183]: 2025-11-23 21:15:06.717 230187 DEBUG oslo.service.loopingcall [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 23 21:15:06 compute-1 nova_compute[230183]: 2025-11-23 21:15:06.718 230187 DEBUG nova.compute.manager [-] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 23 21:15:06 compute-1 nova_compute[230183]: 2025-11-23 21:15:06.718 230187 DEBUG nova.network.neutron [-] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 23 21:15:06 compute-1 nova_compute[230183]: 2025-11-23 21:15:06.759 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:15:06 compute-1 nova_compute[230183]: 2025-11-23 21:15:06.760 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:15:06 compute-1 nova_compute[230183]: 2025-11-23 21:15:06.760 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:15:06 compute-1 nova_compute[230183]: 2025-11-23 21:15:06.761 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 21:15:06 compute-1 ceph-mon[80135]: pgmap v1010: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 23 21:15:07 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:15:07 compute-1 nova_compute[230183]: 2025-11-23 21:15:07.350 230187 DEBUG nova.network.neutron [req-1905cd28-440c-4fba-b36a-091cd5409f2e req-19acd4d8-dc90-4efa-938b-114b92b37572 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Updated VIF entry in instance network info cache for port b71755c1-8148-40c0-884d-aad83ae8602a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 23 21:15:07 compute-1 nova_compute[230183]: 2025-11-23 21:15:07.351 230187 DEBUG nova.network.neutron [req-1905cd28-440c-4fba-b36a-091cd5409f2e req-19acd4d8-dc90-4efa-938b-114b92b37572 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Updating instance_info_cache with network_info: [{"id": "b71755c1-8148-40c0-884d-aad83ae8602a", "address": "fa:16:3e:2a:70:ad", "network": {"id": "33439544-e5f9-4500-9a9c-dbc1c4cd858c", "bridge": "br-int", "label": "tempest-network-smoke--1819281856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb71755c1-81", "ovs_interfaceid": "b71755c1-8148-40c0-884d-aad83ae8602a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:15:07 compute-1 nova_compute[230183]: 2025-11-23 21:15:07.373 230187 DEBUG oslo_concurrency.lockutils [req-1905cd28-440c-4fba-b36a-091cd5409f2e req-19acd4d8-dc90-4efa-938b-114b92b37572 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-c833a97e-dc45-489f-98e1-a2d33397836c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:15:07 compute-1 nova_compute[230183]: 2025-11-23 21:15:07.428 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:15:07 compute-1 nova_compute[230183]: 2025-11-23 21:15:07.428 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 21:15:07 compute-1 nova_compute[230183]: 2025-11-23 21:15:07.428 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 21:15:07 compute-1 nova_compute[230183]: 2025-11-23 21:15:07.448 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 23 21:15:07 compute-1 nova_compute[230183]: 2025-11-23 21:15:07.448 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 21:15:07 compute-1 nova_compute[230183]: 2025-11-23 21:15:07.449 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:15:07 compute-1 nova_compute[230183]: 2025-11-23 21:15:07.449 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:15:07 compute-1 sudo[241972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 21:15:07 compute-1 sudo[241972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:15:07 compute-1 sudo[241972]: pam_unix(sudo:session): session closed for user root
Nov 23 21:15:07 compute-1 nova_compute[230183]: 2025-11-23 21:15:07.490 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:07 compute-1 sudo[241997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 23 21:15:07 compute-1 sudo[241997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:15:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:07.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:07 compute-1 nova_compute[230183]: 2025-11-23 21:15:07.710 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:07 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:07.711 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 21:15:07 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:07.712 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 23 21:15:07 compute-1 nova_compute[230183]: 2025-11-23 21:15:07.749 230187 DEBUG nova.network.neutron [-] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:15:07 compute-1 nova_compute[230183]: 2025-11-23 21:15:07.765 230187 INFO nova.compute.manager [-] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Took 1.05 seconds to deallocate network for instance.
Nov 23 21:15:07 compute-1 ceph-mon[80135]: pgmap v1011: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 12 KiB/s wr, 2 op/s
Nov 23 21:15:07 compute-1 nova_compute[230183]: 2025-11-23 21:15:07.813 230187 DEBUG oslo_concurrency.lockutils [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:15:07 compute-1 nova_compute[230183]: 2025-11-23 21:15:07.813 230187 DEBUG oslo_concurrency.lockutils [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:15:07 compute-1 nova_compute[230183]: 2025-11-23 21:15:07.856 230187 DEBUG oslo_concurrency.processutils [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:15:08 compute-1 podman[242106]: 2025-11-23 21:15:08.026857729 +0000 UTC m=+0.053493359 container exec e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 21:15:08 compute-1 podman[242106]: 2025-11-23 21:15:08.122366398 +0000 UTC m=+0.149002028 container exec_died e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Nov 23 21:15:08 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:15:08 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3253881524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:15:08 compute-1 nova_compute[230183]: 2025-11-23 21:15:08.324 230187 DEBUG oslo_concurrency.processutils [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:15:08 compute-1 nova_compute[230183]: 2025-11-23 21:15:08.331 230187 DEBUG nova.compute.provider_tree [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:15:08 compute-1 nova_compute[230183]: 2025-11-23 21:15:08.345 230187 DEBUG nova.scheduler.client.report [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:15:08 compute-1 nova_compute[230183]: 2025-11-23 21:15:08.365 230187 DEBUG oslo_concurrency.lockutils [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:15:08 compute-1 nova_compute[230183]: 2025-11-23 21:15:08.398 230187 INFO nova.scheduler.client.report [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Deleted allocations for instance c833a97e-dc45-489f-98e1-a2d33397836c
Nov 23 21:15:08 compute-1 nova_compute[230183]: 2025-11-23 21:15:08.490 230187 DEBUG oslo_concurrency.lockutils [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "c833a97e-dc45-489f-98e1-a2d33397836c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:15:08 compute-1 podman[242239]: 2025-11-23 21:15:08.564792378 +0000 UTC m=+0.051799973 container exec 64d60b8099df0a9bc1b978bb8d0ff809e5476e0bdc0e1ff07d52a594a6c59770 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 21:15:08 compute-1 nova_compute[230183]: 2025-11-23 21:15:08.593 230187 DEBUG nova.compute.manager [req-163a40c2-c79b-43cc-86ed-562d20f882d2 req-b86bc670-be50-442c-94e0-a84d124cff40 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Received event network-vif-plugged-b71755c1-8148-40c0-884d-aad83ae8602a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:15:08 compute-1 nova_compute[230183]: 2025-11-23 21:15:08.594 230187 DEBUG oslo_concurrency.lockutils [req-163a40c2-c79b-43cc-86ed-562d20f882d2 req-b86bc670-be50-442c-94e0-a84d124cff40 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "c833a97e-dc45-489f-98e1-a2d33397836c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:15:08 compute-1 nova_compute[230183]: 2025-11-23 21:15:08.594 230187 DEBUG oslo_concurrency.lockutils [req-163a40c2-c79b-43cc-86ed-562d20f882d2 req-b86bc670-be50-442c-94e0-a84d124cff40 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "c833a97e-dc45-489f-98e1-a2d33397836c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:15:08 compute-1 nova_compute[230183]: 2025-11-23 21:15:08.594 230187 DEBUG oslo_concurrency.lockutils [req-163a40c2-c79b-43cc-86ed-562d20f882d2 req-b86bc670-be50-442c-94e0-a84d124cff40 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "c833a97e-dc45-489f-98e1-a2d33397836c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:15:08 compute-1 nova_compute[230183]: 2025-11-23 21:15:08.594 230187 DEBUG nova.compute.manager [req-163a40c2-c79b-43cc-86ed-562d20f882d2 req-b86bc670-be50-442c-94e0-a84d124cff40 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] No waiting events found dispatching network-vif-plugged-b71755c1-8148-40c0-884d-aad83ae8602a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 21:15:08 compute-1 nova_compute[230183]: 2025-11-23 21:15:08.594 230187 WARNING nova.compute.manager [req-163a40c2-c79b-43cc-86ed-562d20f882d2 req-b86bc670-be50-442c-94e0-a84d124cff40 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Received unexpected event network-vif-plugged-b71755c1-8148-40c0-884d-aad83ae8602a for instance with vm_state deleted and task_state None.
Nov 23 21:15:08 compute-1 nova_compute[230183]: 2025-11-23 21:15:08.595 230187 DEBUG nova.compute.manager [req-163a40c2-c79b-43cc-86ed-562d20f882d2 req-b86bc670-be50-442c-94e0-a84d124cff40 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Received event network-vif-deleted-b71755c1-8148-40c0-884d-aad83ae8602a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:15:08 compute-1 podman[242239]: 2025-11-23 21:15:08.602563017 +0000 UTC m=+0.089570582 container exec_died 64d60b8099df0a9bc1b978bb8d0ff809e5476e0bdc0e1ff07d52a594a6c59770 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 21:15:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:15:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:08.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:15:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/3390450181' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 21:15:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/3390450181' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 21:15:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3253881524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:15:08 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:15:08 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:15:08 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 23 21:15:09 compute-1 podman[242370]: 2025-11-23 21:15:09.079589661 +0000 UTC m=+0.052102292 container exec 5efdb4ba0bcd5fe6f292f73f388707523f3095db64c5b10f074cdf2e15575dfb (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei)
Nov 23 21:15:09 compute-1 podman[242370]: 2025-11-23 21:15:09.089251219 +0000 UTC m=+0.061763850 container exec_died 5efdb4ba0bcd5fe6f292f73f388707523f3095db64c5b10f074cdf2e15575dfb (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei)
Nov 23 21:15:09 compute-1 podman[242436]: 2025-11-23 21:15:09.301959397 +0000 UTC m=+0.048276860 container exec 2804f80c8f66202230c93ef9e5dfb79827d221d8c2f51d077915585a4021bec3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc, io.openshift.tags=Ceph keepalived, version=2.2.4, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, io.buildah.version=1.28.2, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, distribution-scope=public)
Nov 23 21:15:09 compute-1 podman[242436]: 2025-11-23 21:15:09.316199557 +0000 UTC m=+0.062516990 container exec_died 2804f80c8f66202230c93ef9e5dfb79827d221d8c2f51d077915585a4021bec3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, description=keepalived for Ceph, vcs-type=git, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, architecture=x86_64, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc.)
Nov 23 21:15:09 compute-1 sudo[241997]: pam_unix(sudo:session): session closed for user root
Nov 23 21:15:09 compute-1 nova_compute[230183]: 2025-11-23 21:15:09.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:15:09 compute-1 sudo[242468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 21:15:09 compute-1 sudo[242468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:15:09 compute-1 sudo[242468]: pam_unix(sudo:session): session closed for user root
Nov 23 21:15:09 compute-1 sudo[242493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 21:15:09 compute-1 sudo[242493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:15:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:15:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:09.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:15:10 compute-1 sudo[242493]: pam_unix(sudo:session): session closed for user root
Nov 23 21:15:10 compute-1 sudo[242550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 21:15:10 compute-1 sudo[242550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:15:10 compute-1 sudo[242550]: pam_unix(sudo:session): session closed for user root
Nov 23 21:15:10 compute-1 sudo[242575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Nov 23 21:15:10 compute-1 sudo[242575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:15:10 compute-1 ceph-mon[80135]: pgmap v1012: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 19 KiB/s wr, 30 op/s
Nov 23 21:15:10 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:15:10 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:15:10 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2727990036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:15:10 compute-1 sudo[242575]: pam_unix(sudo:session): session closed for user root
Nov 23 21:15:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:10.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:11 compute-1 nova_compute[230183]: 2025-11-23 21:15:11.238 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:11 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:15:11 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:15:11 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 23 21:15:11 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1766472216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:15:11 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:15:11 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:15:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:15:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:11.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:15:11 compute-1 nova_compute[230183]: 2025-11-23 21:15:11.699 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:11 compute-1 nova_compute[230183]: 2025-11-23 21:15:11.783 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:12 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:15:12 compute-1 nova_compute[230183]: 2025-11-23 21:15:12.492 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:12 compute-1 ceph-mon[80135]: pgmap v1013: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 6.5 KiB/s wr, 29 op/s
Nov 23 21:15:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:12.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:13 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:15:13 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:15:13 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 23 21:15:13 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:15:13 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 21:15:13 compute-1 ceph-mon[80135]: pgmap v1014: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 6.9 KiB/s wr, 31 op/s
Nov 23 21:15:13 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:15:13 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:15:13 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 21:15:13 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 21:15:13 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:15:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:13.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:14 compute-1 ceph-mon[80135]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Nov 23 21:15:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:14.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:15 compute-1 ceph-mon[80135]: pgmap v1015: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 6.9 KiB/s wr, 31 op/s
Nov 23 21:15:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:15.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:15 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:15.714 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d8ff4ac4-2bee-48db-b79e-2466bc4db046, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:15:16 compute-1 nova_compute[230183]: 2025-11-23 21:15:16.243 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:16.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:16 compute-1 sudo[242622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 21:15:16 compute-1 sudo[242622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:15:17 compute-1 sudo[242622]: pam_unix(sudo:session): session closed for user root
Nov 23 21:15:17 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:15:17 compute-1 nova_compute[230183]: 2025-11-23 21:15:17.494 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:17.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:17 compute-1 ceph-mon[80135]: pgmap v1016: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 6.9 KiB/s wr, 30 op/s
Nov 23 21:15:17 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:15:17 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:15:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:15:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:18.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:15:19 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:15:19 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:15:19 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3133778263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:15:19 compute-1 ceph-mon[80135]: pgmap v1017: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 6.9 KiB/s wr, 30 op/s
Nov 23 21:15:19 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/4161376870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:15:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:19.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:20.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:21 compute-1 nova_compute[230183]: 2025-11-23 21:15:21.175 230187 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763932506.173835, c833a97e-dc45-489f-98e1-a2d33397836c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 23 21:15:21 compute-1 nova_compute[230183]: 2025-11-23 21:15:21.175 230187 INFO nova.compute.manager [-] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] VM Stopped (Lifecycle Event)
Nov 23 21:15:21 compute-1 nova_compute[230183]: 2025-11-23 21:15:21.202 230187 DEBUG nova.compute.manager [None req-25a00114-cb4e-4e3e-9f7a-324ada7a1362 - - - - - -] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:15:21 compute-1 nova_compute[230183]: 2025-11-23 21:15:21.246 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:15:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:21.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:15:21 compute-1 ceph-mon[80135]: pgmap v1018: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 23 21:15:22 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:15:22 compute-1 nova_compute[230183]: 2025-11-23 21:15:22.496 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:22.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:23.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:23 compute-1 ceph-mon[80135]: pgmap v1019: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 23 21:15:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:24.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:24 compute-1 ceph-mon[80135]: pgmap v1020: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:15:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:25.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:26 compute-1 nova_compute[230183]: 2025-11-23 21:15:26.249 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:26 compute-1 sudo[242652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:15:26 compute-1 sudo[242652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:15:26 compute-1 sudo[242652]: pam_unix(sudo:session): session closed for user root
Nov 23 21:15:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:26.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:27 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:15:27 compute-1 nova_compute[230183]: 2025-11-23 21:15:27.498 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:27.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:27 compute-1 ceph-mon[80135]: pgmap v1021: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:15:27 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2993740014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:15:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:28.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:29 compute-1 podman[242679]: 2025-11-23 21:15:29.677753669 +0000 UTC m=+0.088989134 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 21:15:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:29.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:29 compute-1 ceph-mon[80135]: pgmap v1022: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:15:29 compute-1 podman[242678]: 2025-11-23 21:15:29.762653326 +0000 UTC m=+0.173848140 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 23 21:15:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:30.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:30 compute-1 ceph-mon[80135]: pgmap v1023: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 23 21:15:31 compute-1 nova_compute[230183]: 2025-11-23 21:15:31.251 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:31.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:32 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:15:32 compute-1 nova_compute[230183]: 2025-11-23 21:15:32.499 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:15:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:32.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:15:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:33.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:33 compute-1 ceph-mon[80135]: pgmap v1024: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 23 21:15:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:15:33 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1002385892' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:15:34 compute-1 podman[242725]: 2025-11-23 21:15:34.64472733 +0000 UTC m=+0.059562262 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 23 21:15:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:15:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:34.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:15:34 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3826056991' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:15:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:35.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:35 compute-1 ceph-mon[80135]: pgmap v1025: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 23 21:15:36 compute-1 nova_compute[230183]: 2025-11-23 21:15:36.255 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:36.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:36 compute-1 ceph-mon[80135]: pgmap v1026: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 23 21:15:37 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:15:37 compute-1 nova_compute[230183]: 2025-11-23 21:15:37.502 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:15:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:37.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:15:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:38.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:39.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:39 compute-1 ceph-mon[80135]: pgmap v1027: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 23 21:15:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:40.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:41 compute-1 nova_compute[230183]: 2025-11-23 21:15:41.259 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:41.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:41 compute-1 ceph-mon[80135]: pgmap v1028: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 23 21:15:42 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:15:42 compute-1 nova_compute[230183]: 2025-11-23 21:15:42.503 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 21:15:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:42.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 21:15:42 compute-1 ceph-mon[80135]: pgmap v1029: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 23 21:15:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:43.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:44 compute-1 sshd-session[242750]: Invalid user sol from 92.118.39.92 port 43046
Nov 23 21:15:44 compute-1 nova_compute[230183]: 2025-11-23 21:15:44.481 230187 DEBUG oslo_concurrency.lockutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "4384cda9-2a35-4df4-84b1-a045a41852ac" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:15:44 compute-1 nova_compute[230183]: 2025-11-23 21:15:44.481 230187 DEBUG oslo_concurrency.lockutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "4384cda9-2a35-4df4-84b1-a045a41852ac" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:15:44 compute-1 sshd-session[242750]: Connection closed by invalid user sol 92.118.39.92 port 43046 [preauth]
Nov 23 21:15:44 compute-1 nova_compute[230183]: 2025-11-23 21:15:44.510 230187 DEBUG nova.compute.manager [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 23 21:15:44 compute-1 nova_compute[230183]: 2025-11-23 21:15:44.626 230187 DEBUG oslo_concurrency.lockutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:15:44 compute-1 nova_compute[230183]: 2025-11-23 21:15:44.626 230187 DEBUG oslo_concurrency.lockutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:15:44 compute-1 nova_compute[230183]: 2025-11-23 21:15:44.637 230187 DEBUG nova.virt.hardware [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 23 21:15:44 compute-1 nova_compute[230183]: 2025-11-23 21:15:44.638 230187 INFO nova.compute.claims [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Claim successful on node compute-1.ctlplane.example.com
Nov 23 21:15:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:15:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:44.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:15:44 compute-1 nova_compute[230183]: 2025-11-23 21:15:44.743 230187 DEBUG oslo_concurrency.processutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:15:45 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:15:45 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1589551055' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:15:45 compute-1 nova_compute[230183]: 2025-11-23 21:15:45.218 230187 DEBUG oslo_concurrency.processutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:15:45 compute-1 nova_compute[230183]: 2025-11-23 21:15:45.224 230187 DEBUG nova.compute.provider_tree [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:15:45 compute-1 nova_compute[230183]: 2025-11-23 21:15:45.239 230187 DEBUG nova.scheduler.client.report [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:15:45 compute-1 nova_compute[230183]: 2025-11-23 21:15:45.261 230187 DEBUG oslo_concurrency.lockutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:15:45 compute-1 nova_compute[230183]: 2025-11-23 21:15:45.262 230187 DEBUG nova.compute.manager [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 23 21:15:45 compute-1 nova_compute[230183]: 2025-11-23 21:15:45.328 230187 DEBUG nova.compute.manager [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 23 21:15:45 compute-1 nova_compute[230183]: 2025-11-23 21:15:45.328 230187 DEBUG nova.network.neutron [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 23 21:15:45 compute-1 nova_compute[230183]: 2025-11-23 21:15:45.341 230187 INFO nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 23 21:15:45 compute-1 nova_compute[230183]: 2025-11-23 21:15:45.359 230187 DEBUG nova.compute.manager [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 23 21:15:45 compute-1 nova_compute[230183]: 2025-11-23 21:15:45.439 230187 DEBUG nova.compute.manager [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 23 21:15:45 compute-1 nova_compute[230183]: 2025-11-23 21:15:45.440 230187 DEBUG nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 23 21:15:45 compute-1 nova_compute[230183]: 2025-11-23 21:15:45.441 230187 INFO nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Creating image(s)
Nov 23 21:15:45 compute-1 nova_compute[230183]: 2025-11-23 21:15:45.474 230187 DEBUG nova.storage.rbd_utils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 4384cda9-2a35-4df4-84b1-a045a41852ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:15:45 compute-1 nova_compute[230183]: 2025-11-23 21:15:45.498 230187 DEBUG nova.storage.rbd_utils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 4384cda9-2a35-4df4-84b1-a045a41852ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:15:45 compute-1 nova_compute[230183]: 2025-11-23 21:15:45.521 230187 DEBUG nova.storage.rbd_utils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 4384cda9-2a35-4df4-84b1-a045a41852ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:15:45 compute-1 nova_compute[230183]: 2025-11-23 21:15:45.523 230187 DEBUG oslo_concurrency.processutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:15:45 compute-1 nova_compute[230183]: 2025-11-23 21:15:45.572 230187 DEBUG oslo_concurrency.processutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:15:45 compute-1 nova_compute[230183]: 2025-11-23 21:15:45.573 230187 DEBUG oslo_concurrency.lockutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:15:45 compute-1 nova_compute[230183]: 2025-11-23 21:15:45.574 230187 DEBUG oslo_concurrency.lockutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:15:45 compute-1 nova_compute[230183]: 2025-11-23 21:15:45.574 230187 DEBUG oslo_concurrency.lockutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:15:45 compute-1 nova_compute[230183]: 2025-11-23 21:15:45.597 230187 DEBUG nova.storage.rbd_utils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 4384cda9-2a35-4df4-84b1-a045a41852ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:15:45 compute-1 nova_compute[230183]: 2025-11-23 21:15:45.599 230187 DEBUG oslo_concurrency.processutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 4384cda9-2a35-4df4-84b1-a045a41852ac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:15:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:45.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:45 compute-1 ceph-mon[80135]: pgmap v1030: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Nov 23 21:15:45 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1589551055' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:15:45 compute-1 nova_compute[230183]: 2025-11-23 21:15:45.838 230187 DEBUG oslo_concurrency.processutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 4384cda9-2a35-4df4-84b1-a045a41852ac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.238s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:15:45 compute-1 nova_compute[230183]: 2025-11-23 21:15:45.889 230187 DEBUG nova.storage.rbd_utils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] resizing rbd image 4384cda9-2a35-4df4-84b1-a045a41852ac_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 23 21:15:45 compute-1 nova_compute[230183]: 2025-11-23 21:15:45.972 230187 DEBUG nova.objects.instance [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'migration_context' on Instance uuid 4384cda9-2a35-4df4-84b1-a045a41852ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:15:45 compute-1 nova_compute[230183]: 2025-11-23 21:15:45.983 230187 DEBUG nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 23 21:15:45 compute-1 nova_compute[230183]: 2025-11-23 21:15:45.983 230187 DEBUG nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Ensure instance console log exists: /var/lib/nova/instances/4384cda9-2a35-4df4-84b1-a045a41852ac/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 23 21:15:45 compute-1 nova_compute[230183]: 2025-11-23 21:15:45.983 230187 DEBUG oslo_concurrency.lockutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:15:45 compute-1 nova_compute[230183]: 2025-11-23 21:15:45.984 230187 DEBUG oslo_concurrency.lockutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:15:45 compute-1 nova_compute[230183]: 2025-11-23 21:15:45.984 230187 DEBUG oslo_concurrency.lockutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:15:46 compute-1 nova_compute[230183]: 2025-11-23 21:15:46.075 230187 DEBUG nova.policy [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9fb5352c62684f2ba3a326a953a10dfe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '782593db60784ab8bff41fe87d72ff5f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 23 21:15:46 compute-1 nova_compute[230183]: 2025-11-23 21:15:46.262 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:46 compute-1 sudo[242941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:15:46 compute-1 sudo[242941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:15:46 compute-1 sudo[242941]: pam_unix(sudo:session): session closed for user root
Nov 23 21:15:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:15:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:46.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:15:46 compute-1 ceph-mon[80135]: pgmap v1031: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 23 21:15:47 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:15:47 compute-1 nova_compute[230183]: 2025-11-23 21:15:47.505 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:47.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:15:48 compute-1 nova_compute[230183]: 2025-11-23 21:15:48.468 230187 DEBUG nova.network.neutron [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Successfully created port: deb2e9cc-993f-4f9a-934e-0921fdf22170 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 23 21:15:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:15:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:48.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:15:48 compute-1 ovn_controller[132845]: 2025-11-23T21:15:48Z|00114|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 23 21:15:49 compute-1 ceph-mon[80135]: pgmap v1032: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 23 21:15:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:49.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:50 compute-1 nova_compute[230183]: 2025-11-23 21:15:50.471 230187 DEBUG nova.network.neutron [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Successfully updated port: deb2e9cc-993f-4f9a-934e-0921fdf22170 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 23 21:15:50 compute-1 nova_compute[230183]: 2025-11-23 21:15:50.484 230187 DEBUG oslo_concurrency.lockutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "refresh_cache-4384cda9-2a35-4df4-84b1-a045a41852ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:15:50 compute-1 nova_compute[230183]: 2025-11-23 21:15:50.485 230187 DEBUG oslo_concurrency.lockutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquired lock "refresh_cache-4384cda9-2a35-4df4-84b1-a045a41852ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:15:50 compute-1 nova_compute[230183]: 2025-11-23 21:15:50.485 230187 DEBUG nova.network.neutron [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 23 21:15:50 compute-1 nova_compute[230183]: 2025-11-23 21:15:50.585 230187 DEBUG nova.compute.manager [req-af1d7d1e-2630-4c55-b3f5-37d9abe8dcee req-e6298979-66b6-44ec-b74f-c3e8e13eba9d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Received event network-changed-deb2e9cc-993f-4f9a-934e-0921fdf22170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:15:50 compute-1 nova_compute[230183]: 2025-11-23 21:15:50.586 230187 DEBUG nova.compute.manager [req-af1d7d1e-2630-4c55-b3f5-37d9abe8dcee req-e6298979-66b6-44ec-b74f-c3e8e13eba9d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Refreshing instance network info cache due to event network-changed-deb2e9cc-993f-4f9a-934e-0921fdf22170. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 23 21:15:50 compute-1 nova_compute[230183]: 2025-11-23 21:15:50.586 230187 DEBUG oslo_concurrency.lockutils [req-af1d7d1e-2630-4c55-b3f5-37d9abe8dcee req-e6298979-66b6-44ec-b74f-c3e8e13eba9d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-4384cda9-2a35-4df4-84b1-a045a41852ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:15:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:50.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:51.072 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:15:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:51.072 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:15:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:51.073 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:15:51 compute-1 nova_compute[230183]: 2025-11-23 21:15:51.266 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:51 compute-1 nova_compute[230183]: 2025-11-23 21:15:51.341 230187 DEBUG nova.network.neutron [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 23 21:15:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:51.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:51 compute-1 ceph-mon[80135]: pgmap v1033: 337 pgs: 337 active+clean; 163 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 156 op/s
Nov 23 21:15:52 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:15:52 compute-1 nova_compute[230183]: 2025-11-23 21:15:52.506 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:15:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:52.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:15:52 compute-1 nova_compute[230183]: 2025-11-23 21:15:52.773 230187 DEBUG nova.network.neutron [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Updating instance_info_cache with network_info: [{"id": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "address": "fa:16:3e:19:cb:53", "network": {"id": "2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7", "bridge": "br-int", "label": "tempest-network-smoke--943664961", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb2e9cc-99", "ovs_interfaceid": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:15:52 compute-1 nova_compute[230183]: 2025-11-23 21:15:52.793 230187 DEBUG oslo_concurrency.lockutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Releasing lock "refresh_cache-4384cda9-2a35-4df4-84b1-a045a41852ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:15:52 compute-1 nova_compute[230183]: 2025-11-23 21:15:52.794 230187 DEBUG nova.compute.manager [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Instance network_info: |[{"id": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "address": "fa:16:3e:19:cb:53", "network": {"id": "2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7", "bridge": "br-int", "label": "tempest-network-smoke--943664961", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb2e9cc-99", "ovs_interfaceid": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 23 21:15:52 compute-1 nova_compute[230183]: 2025-11-23 21:15:52.795 230187 DEBUG oslo_concurrency.lockutils [req-af1d7d1e-2630-4c55-b3f5-37d9abe8dcee req-e6298979-66b6-44ec-b74f-c3e8e13eba9d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-4384cda9-2a35-4df4-84b1-a045a41852ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:15:52 compute-1 nova_compute[230183]: 2025-11-23 21:15:52.795 230187 DEBUG nova.network.neutron [req-af1d7d1e-2630-4c55-b3f5-37d9abe8dcee req-e6298979-66b6-44ec-b74f-c3e8e13eba9d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Refreshing network info cache for port deb2e9cc-993f-4f9a-934e-0921fdf22170 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 23 21:15:52 compute-1 nova_compute[230183]: 2025-11-23 21:15:52.800 230187 DEBUG nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Start _get_guest_xml network_info=[{"id": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "address": "fa:16:3e:19:cb:53", "network": {"id": "2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7", "bridge": "br-int", "label": "tempest-network-smoke--943664961", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb2e9cc-99", "ovs_interfaceid": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T21:05:43Z,direct_url=<?>,disk_format='qcow2',id=3c45fa6c-8a99-4359-a34e-d89f4e1e77d0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3f8fb5175f85402ba20cf9c6989d47cf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-23T21:05:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_options': None, 'encryption_format': None, 'guest_format': None, 'encrypted': False, 'size': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'image_id': '3c45fa6c-8a99-4359-a34e-d89f4e1e77d0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 23 21:15:52 compute-1 nova_compute[230183]: 2025-11-23 21:15:52.805 230187 WARNING nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 21:15:52 compute-1 nova_compute[230183]: 2025-11-23 21:15:52.815 230187 DEBUG nova.virt.libvirt.host [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 23 21:15:52 compute-1 nova_compute[230183]: 2025-11-23 21:15:52.815 230187 DEBUG nova.virt.libvirt.host [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 23 21:15:52 compute-1 nova_compute[230183]: 2025-11-23 21:15:52.819 230187 DEBUG nova.virt.libvirt.host [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 23 21:15:52 compute-1 nova_compute[230183]: 2025-11-23 21:15:52.820 230187 DEBUG nova.virt.libvirt.host [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 23 21:15:52 compute-1 nova_compute[230183]: 2025-11-23 21:15:52.820 230187 DEBUG nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 23 21:15:52 compute-1 nova_compute[230183]: 2025-11-23 21:15:52.820 230187 DEBUG nova.virt.hardware [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-23T21:05:42Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='56044b93-2979-48aa-b67f-c37e1b489306',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T21:05:43Z,direct_url=<?>,disk_format='qcow2',id=3c45fa6c-8a99-4359-a34e-d89f4e1e77d0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3f8fb5175f85402ba20cf9c6989d47cf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-23T21:05:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 23 21:15:52 compute-1 nova_compute[230183]: 2025-11-23 21:15:52.821 230187 DEBUG nova.virt.hardware [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 23 21:15:52 compute-1 nova_compute[230183]: 2025-11-23 21:15:52.821 230187 DEBUG nova.virt.hardware [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 23 21:15:52 compute-1 nova_compute[230183]: 2025-11-23 21:15:52.822 230187 DEBUG nova.virt.hardware [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 23 21:15:52 compute-1 nova_compute[230183]: 2025-11-23 21:15:52.822 230187 DEBUG nova.virt.hardware [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 23 21:15:52 compute-1 nova_compute[230183]: 2025-11-23 21:15:52.822 230187 DEBUG nova.virt.hardware [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 23 21:15:52 compute-1 nova_compute[230183]: 2025-11-23 21:15:52.822 230187 DEBUG nova.virt.hardware [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 23 21:15:52 compute-1 nova_compute[230183]: 2025-11-23 21:15:52.823 230187 DEBUG nova.virt.hardware [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 23 21:15:52 compute-1 nova_compute[230183]: 2025-11-23 21:15:52.823 230187 DEBUG nova.virt.hardware [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 23 21:15:52 compute-1 nova_compute[230183]: 2025-11-23 21:15:52.823 230187 DEBUG nova.virt.hardware [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 23 21:15:52 compute-1 nova_compute[230183]: 2025-11-23 21:15:52.824 230187 DEBUG nova.virt.hardware [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 23 21:15:52 compute-1 nova_compute[230183]: 2025-11-23 21:15:52.826 230187 DEBUG oslo_concurrency.processutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:15:53 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 21:15:53 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/775750339' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:15:53 compute-1 nova_compute[230183]: 2025-11-23 21:15:53.279 230187 DEBUG oslo_concurrency.processutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:15:53 compute-1 nova_compute[230183]: 2025-11-23 21:15:53.318 230187 DEBUG nova.storage.rbd_utils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 4384cda9-2a35-4df4-84b1-a045a41852ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:15:53 compute-1 nova_compute[230183]: 2025-11-23 21:15:53.324 230187 DEBUG oslo_concurrency.processutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:15:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:53.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:53 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 21:15:53 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1432727154' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:15:53 compute-1 ceph-mon[80135]: pgmap v1034: 337 pgs: 337 active+clean; 163 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 323 KiB/s rd, 3.9 MiB/s wr, 82 op/s
Nov 23 21:15:53 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/775750339' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:15:53 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1432727154' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:15:53 compute-1 nova_compute[230183]: 2025-11-23 21:15:53.774 230187 DEBUG oslo_concurrency.processutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:15:53 compute-1 nova_compute[230183]: 2025-11-23 21:15:53.775 230187 DEBUG nova.virt.libvirt.vif [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-23T21:15:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1855303843',display_name='tempest-TestNetworkBasicOps-server-1855303843',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1855303843',id=12,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAycEnFV4AnrY6tCOqSabQ0TZJ55Jf3TdrBRrViOQ4YjFLRSLQxmifTjYTiV91MZtamqBqC7Pgt4UqC3q5yq6gNP1UI71Vl55q0bshrNqJ4oe/KPbzHMTwu1zmJ8/r6BYA==',key_name='tempest-TestNetworkBasicOps-403372706',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-g8xsviwh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T21:15:45Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=4384cda9-2a35-4df4-84b1-a045a41852ac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "address": "fa:16:3e:19:cb:53", "network": {"id": "2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7", "bridge": "br-int", "label": "tempest-network-smoke--943664961", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb2e9cc-99", "ovs_interfaceid": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 23 21:15:53 compute-1 nova_compute[230183]: 2025-11-23 21:15:53.776 230187 DEBUG nova.network.os_vif_util [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "address": "fa:16:3e:19:cb:53", "network": {"id": "2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7", "bridge": "br-int", "label": "tempest-network-smoke--943664961", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb2e9cc-99", "ovs_interfaceid": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 23 21:15:53 compute-1 nova_compute[230183]: 2025-11-23 21:15:53.776 230187 DEBUG nova.network.os_vif_util [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:cb:53,bridge_name='br-int',has_traffic_filtering=True,id=deb2e9cc-993f-4f9a-934e-0921fdf22170,network=Network(2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdeb2e9cc-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 23 21:15:53 compute-1 nova_compute[230183]: 2025-11-23 21:15:53.777 230187 DEBUG nova.objects.instance [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'pci_devices' on Instance uuid 4384cda9-2a35-4df4-84b1-a045a41852ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:15:53 compute-1 nova_compute[230183]: 2025-11-23 21:15:53.791 230187 DEBUG nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] End _get_guest_xml xml=<domain type="kvm">
Nov 23 21:15:53 compute-1 nova_compute[230183]:   <uuid>4384cda9-2a35-4df4-84b1-a045a41852ac</uuid>
Nov 23 21:15:53 compute-1 nova_compute[230183]:   <name>instance-0000000c</name>
Nov 23 21:15:53 compute-1 nova_compute[230183]:   <memory>131072</memory>
Nov 23 21:15:53 compute-1 nova_compute[230183]:   <vcpu>1</vcpu>
Nov 23 21:15:53 compute-1 nova_compute[230183]:   <metadata>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 21:15:53 compute-1 nova_compute[230183]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:       <nova:name>tempest-TestNetworkBasicOps-server-1855303843</nova:name>
Nov 23 21:15:53 compute-1 nova_compute[230183]:       <nova:creationTime>2025-11-23 21:15:52</nova:creationTime>
Nov 23 21:15:53 compute-1 nova_compute[230183]:       <nova:flavor name="m1.nano">
Nov 23 21:15:53 compute-1 nova_compute[230183]:         <nova:memory>128</nova:memory>
Nov 23 21:15:53 compute-1 nova_compute[230183]:         <nova:disk>1</nova:disk>
Nov 23 21:15:53 compute-1 nova_compute[230183]:         <nova:swap>0</nova:swap>
Nov 23 21:15:53 compute-1 nova_compute[230183]:         <nova:ephemeral>0</nova:ephemeral>
Nov 23 21:15:53 compute-1 nova_compute[230183]:         <nova:vcpus>1</nova:vcpus>
Nov 23 21:15:53 compute-1 nova_compute[230183]:       </nova:flavor>
Nov 23 21:15:53 compute-1 nova_compute[230183]:       <nova:owner>
Nov 23 21:15:53 compute-1 nova_compute[230183]:         <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 21:15:53 compute-1 nova_compute[230183]:         <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 21:15:53 compute-1 nova_compute[230183]:       </nova:owner>
Nov 23 21:15:53 compute-1 nova_compute[230183]:       <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:       <nova:ports>
Nov 23 21:15:53 compute-1 nova_compute[230183]:         <nova:port uuid="deb2e9cc-993f-4f9a-934e-0921fdf22170">
Nov 23 21:15:53 compute-1 nova_compute[230183]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:         </nova:port>
Nov 23 21:15:53 compute-1 nova_compute[230183]:       </nova:ports>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     </nova:instance>
Nov 23 21:15:53 compute-1 nova_compute[230183]:   </metadata>
Nov 23 21:15:53 compute-1 nova_compute[230183]:   <sysinfo type="smbios">
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <system>
Nov 23 21:15:53 compute-1 nova_compute[230183]:       <entry name="manufacturer">RDO</entry>
Nov 23 21:15:53 compute-1 nova_compute[230183]:       <entry name="product">OpenStack Compute</entry>
Nov 23 21:15:53 compute-1 nova_compute[230183]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 23 21:15:53 compute-1 nova_compute[230183]:       <entry name="serial">4384cda9-2a35-4df4-84b1-a045a41852ac</entry>
Nov 23 21:15:53 compute-1 nova_compute[230183]:       <entry name="uuid">4384cda9-2a35-4df4-84b1-a045a41852ac</entry>
Nov 23 21:15:53 compute-1 nova_compute[230183]:       <entry name="family">Virtual Machine</entry>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     </system>
Nov 23 21:15:53 compute-1 nova_compute[230183]:   </sysinfo>
Nov 23 21:15:53 compute-1 nova_compute[230183]:   <os>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <boot dev="hd"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <smbios mode="sysinfo"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:   </os>
Nov 23 21:15:53 compute-1 nova_compute[230183]:   <features>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <acpi/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <apic/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <vmcoreinfo/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:   </features>
Nov 23 21:15:53 compute-1 nova_compute[230183]:   <clock offset="utc">
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <timer name="pit" tickpolicy="delay"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <timer name="hpet" present="no"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:   </clock>
Nov 23 21:15:53 compute-1 nova_compute[230183]:   <cpu mode="host-model" match="exact">
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <topology sockets="1" cores="1" threads="1"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:   </cpu>
Nov 23 21:15:53 compute-1 nova_compute[230183]:   <devices>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <disk type="network" device="disk">
Nov 23 21:15:53 compute-1 nova_compute[230183]:       <driver type="raw" cache="none"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:       <source protocol="rbd" name="vms/4384cda9-2a35-4df4-84b1-a045a41852ac_disk">
Nov 23 21:15:53 compute-1 nova_compute[230183]:         <host name="192.168.122.100" port="6789"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:         <host name="192.168.122.102" port="6789"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:         <host name="192.168.122.101" port="6789"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:       </source>
Nov 23 21:15:53 compute-1 nova_compute[230183]:       <auth username="openstack">
Nov 23 21:15:53 compute-1 nova_compute[230183]:         <secret type="ceph" uuid="03808be8-ae4a-5548-82e6-4a294f1bc627"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:       </auth>
Nov 23 21:15:53 compute-1 nova_compute[230183]:       <target dev="vda" bus="virtio"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     </disk>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <disk type="network" device="cdrom">
Nov 23 21:15:53 compute-1 nova_compute[230183]:       <driver type="raw" cache="none"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:       <source protocol="rbd" name="vms/4384cda9-2a35-4df4-84b1-a045a41852ac_disk.config">
Nov 23 21:15:53 compute-1 nova_compute[230183]:         <host name="192.168.122.100" port="6789"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:         <host name="192.168.122.102" port="6789"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:         <host name="192.168.122.101" port="6789"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:       </source>
Nov 23 21:15:53 compute-1 nova_compute[230183]:       <auth username="openstack">
Nov 23 21:15:53 compute-1 nova_compute[230183]:         <secret type="ceph" uuid="03808be8-ae4a-5548-82e6-4a294f1bc627"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:       </auth>
Nov 23 21:15:53 compute-1 nova_compute[230183]:       <target dev="sda" bus="sata"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     </disk>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <interface type="ethernet">
Nov 23 21:15:53 compute-1 nova_compute[230183]:       <mac address="fa:16:3e:19:cb:53"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:       <model type="virtio"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:       <driver name="vhost" rx_queue_size="512"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:       <mtu size="1442"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:       <target dev="tapdeb2e9cc-99"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     </interface>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <serial type="pty">
Nov 23 21:15:53 compute-1 nova_compute[230183]:       <log file="/var/lib/nova/instances/4384cda9-2a35-4df4-84b1-a045a41852ac/console.log" append="off"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     </serial>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <video>
Nov 23 21:15:53 compute-1 nova_compute[230183]:       <model type="virtio"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     </video>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <input type="tablet" bus="usb"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <rng model="virtio">
Nov 23 21:15:53 compute-1 nova_compute[230183]:       <backend model="random">/dev/urandom</backend>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     </rng>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <controller type="usb" index="0"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     <memballoon model="virtio">
Nov 23 21:15:53 compute-1 nova_compute[230183]:       <stats period="10"/>
Nov 23 21:15:53 compute-1 nova_compute[230183]:     </memballoon>
Nov 23 21:15:53 compute-1 nova_compute[230183]:   </devices>
Nov 23 21:15:53 compute-1 nova_compute[230183]: </domain>
Nov 23 21:15:53 compute-1 nova_compute[230183]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 23 21:15:53 compute-1 nova_compute[230183]: 2025-11-23 21:15:53.791 230187 DEBUG nova.compute.manager [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Preparing to wait for external event network-vif-plugged-deb2e9cc-993f-4f9a-934e-0921fdf22170 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 23 21:15:53 compute-1 nova_compute[230183]: 2025-11-23 21:15:53.791 230187 DEBUG oslo_concurrency.lockutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "4384cda9-2a35-4df4-84b1-a045a41852ac-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:15:53 compute-1 nova_compute[230183]: 2025-11-23 21:15:53.792 230187 DEBUG oslo_concurrency.lockutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "4384cda9-2a35-4df4-84b1-a045a41852ac-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:15:53 compute-1 nova_compute[230183]: 2025-11-23 21:15:53.792 230187 DEBUG oslo_concurrency.lockutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "4384cda9-2a35-4df4-84b1-a045a41852ac-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:15:53 compute-1 nova_compute[230183]: 2025-11-23 21:15:53.792 230187 DEBUG nova.virt.libvirt.vif [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-23T21:15:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1855303843',display_name='tempest-TestNetworkBasicOps-server-1855303843',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1855303843',id=12,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAycEnFV4AnrY6tCOqSabQ0TZJ55Jf3TdrBRrViOQ4YjFLRSLQxmifTjYTiV91MZtamqBqC7Pgt4UqC3q5yq6gNP1UI71Vl55q0bshrNqJ4oe/KPbzHMTwu1zmJ8/r6BYA==',key_name='tempest-TestNetworkBasicOps-403372706',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-g8xsviwh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T21:15:45Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=4384cda9-2a35-4df4-84b1-a045a41852ac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "address": "fa:16:3e:19:cb:53", "network": {"id": "2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7", "bridge": "br-int", "label": "tempest-network-smoke--943664961", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb2e9cc-99", "ovs_interfaceid": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 23 21:15:53 compute-1 nova_compute[230183]: 2025-11-23 21:15:53.792 230187 DEBUG nova.network.os_vif_util [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "address": "fa:16:3e:19:cb:53", "network": {"id": "2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7", "bridge": "br-int", "label": "tempest-network-smoke--943664961", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb2e9cc-99", "ovs_interfaceid": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 23 21:15:53 compute-1 nova_compute[230183]: 2025-11-23 21:15:53.793 230187 DEBUG nova.network.os_vif_util [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:cb:53,bridge_name='br-int',has_traffic_filtering=True,id=deb2e9cc-993f-4f9a-934e-0921fdf22170,network=Network(2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdeb2e9cc-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 23 21:15:53 compute-1 nova_compute[230183]: 2025-11-23 21:15:53.793 230187 DEBUG os_vif [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:cb:53,bridge_name='br-int',has_traffic_filtering=True,id=deb2e9cc-993f-4f9a-934e-0921fdf22170,network=Network(2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdeb2e9cc-99') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 23 21:15:53 compute-1 nova_compute[230183]: 2025-11-23 21:15:53.794 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:53 compute-1 nova_compute[230183]: 2025-11-23 21:15:53.794 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:15:53 compute-1 nova_compute[230183]: 2025-11-23 21:15:53.794 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 23 21:15:53 compute-1 nova_compute[230183]: 2025-11-23 21:15:53.797 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:53 compute-1 nova_compute[230183]: 2025-11-23 21:15:53.798 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdeb2e9cc-99, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:15:53 compute-1 nova_compute[230183]: 2025-11-23 21:15:53.798 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdeb2e9cc-99, col_values=(('external_ids', {'iface-id': 'deb2e9cc-993f-4f9a-934e-0921fdf22170', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:19:cb:53', 'vm-uuid': '4384cda9-2a35-4df4-84b1-a045a41852ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:15:53 compute-1 nova_compute[230183]: 2025-11-23 21:15:53.800 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:53 compute-1 NetworkManager[49021]: <info>  [1763932553.8009] manager: (tapdeb2e9cc-99): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Nov 23 21:15:53 compute-1 nova_compute[230183]: 2025-11-23 21:15:53.802 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:15:53 compute-1 nova_compute[230183]: 2025-11-23 21:15:53.807 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:53 compute-1 nova_compute[230183]: 2025-11-23 21:15:53.807 230187 INFO os_vif [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:cb:53,bridge_name='br-int',has_traffic_filtering=True,id=deb2e9cc-993f-4f9a-934e-0921fdf22170,network=Network(2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdeb2e9cc-99')
Nov 23 21:15:53 compute-1 nova_compute[230183]: 2025-11-23 21:15:53.857 230187 DEBUG nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 23 21:15:53 compute-1 nova_compute[230183]: 2025-11-23 21:15:53.857 230187 DEBUG nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 23 21:15:53 compute-1 nova_compute[230183]: 2025-11-23 21:15:53.857 230187 DEBUG nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No VIF found with MAC fa:16:3e:19:cb:53, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 23 21:15:53 compute-1 nova_compute[230183]: 2025-11-23 21:15:53.858 230187 INFO nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Using config drive
Nov 23 21:15:53 compute-1 nova_compute[230183]: 2025-11-23 21:15:53.879 230187 DEBUG nova.storage.rbd_utils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 4384cda9-2a35-4df4-84b1-a045a41852ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:15:54 compute-1 nova_compute[230183]: 2025-11-23 21:15:54.379 230187 DEBUG nova.network.neutron [req-af1d7d1e-2630-4c55-b3f5-37d9abe8dcee req-e6298979-66b6-44ec-b74f-c3e8e13eba9d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Updated VIF entry in instance network info cache for port deb2e9cc-993f-4f9a-934e-0921fdf22170. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 23 21:15:54 compute-1 nova_compute[230183]: 2025-11-23 21:15:54.379 230187 DEBUG nova.network.neutron [req-af1d7d1e-2630-4c55-b3f5-37d9abe8dcee req-e6298979-66b6-44ec-b74f-c3e8e13eba9d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Updating instance_info_cache with network_info: [{"id": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "address": "fa:16:3e:19:cb:53", "network": {"id": "2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7", "bridge": "br-int", "label": "tempest-network-smoke--943664961", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb2e9cc-99", "ovs_interfaceid": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:15:54 compute-1 nova_compute[230183]: 2025-11-23 21:15:54.394 230187 DEBUG oslo_concurrency.lockutils [req-af1d7d1e-2630-4c55-b3f5-37d9abe8dcee req-e6298979-66b6-44ec-b74f-c3e8e13eba9d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-4384cda9-2a35-4df4-84b1-a045a41852ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:15:54 compute-1 nova_compute[230183]: 2025-11-23 21:15:54.539 230187 INFO nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Creating config drive at /var/lib/nova/instances/4384cda9-2a35-4df4-84b1-a045a41852ac/disk.config
Nov 23 21:15:54 compute-1 nova_compute[230183]: 2025-11-23 21:15:54.544 230187 DEBUG oslo_concurrency.processutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4384cda9-2a35-4df4-84b1-a045a41852ac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6wst1rus execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:15:54 compute-1 nova_compute[230183]: 2025-11-23 21:15:54.667 230187 DEBUG oslo_concurrency.processutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4384cda9-2a35-4df4-84b1-a045a41852ac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6wst1rus" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:15:54 compute-1 nova_compute[230183]: 2025-11-23 21:15:54.694 230187 DEBUG nova.storage.rbd_utils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 4384cda9-2a35-4df4-84b1-a045a41852ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:15:54 compute-1 nova_compute[230183]: 2025-11-23 21:15:54.697 230187 DEBUG oslo_concurrency.processutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4384cda9-2a35-4df4-84b1-a045a41852ac/disk.config 4384cda9-2a35-4df4-84b1-a045a41852ac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:15:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:54.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:54 compute-1 ceph-mon[80135]: pgmap v1035: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Nov 23 21:15:54 compute-1 nova_compute[230183]: 2025-11-23 21:15:54.869 230187 DEBUG oslo_concurrency.processutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4384cda9-2a35-4df4-84b1-a045a41852ac/disk.config 4384cda9-2a35-4df4-84b1-a045a41852ac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:15:54 compute-1 nova_compute[230183]: 2025-11-23 21:15:54.870 230187 INFO nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Deleting local config drive /var/lib/nova/instances/4384cda9-2a35-4df4-84b1-a045a41852ac/disk.config because it was imported into RBD.
Nov 23 21:15:54 compute-1 kernel: tapdeb2e9cc-99: entered promiscuous mode
Nov 23 21:15:54 compute-1 NetworkManager[49021]: <info>  [1763932554.9108] manager: (tapdeb2e9cc-99): new Tun device (/org/freedesktop/NetworkManager/Devices/72)
Nov 23 21:15:54 compute-1 ovn_controller[132845]: 2025-11-23T21:15:54Z|00115|binding|INFO|Claiming lport deb2e9cc-993f-4f9a-934e-0921fdf22170 for this chassis.
Nov 23 21:15:54 compute-1 nova_compute[230183]: 2025-11-23 21:15:54.910 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:54 compute-1 ovn_controller[132845]: 2025-11-23T21:15:54Z|00116|binding|INFO|deb2e9cc-993f-4f9a-934e-0921fdf22170: Claiming fa:16:3e:19:cb:53 10.100.0.14
Nov 23 21:15:54 compute-1 nova_compute[230183]: 2025-11-23 21:15:54.916 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:54 compute-1 nova_compute[230183]: 2025-11-23 21:15:54.918 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:54 compute-1 nova_compute[230183]: 2025-11-23 21:15:54.923 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:54 compute-1 nova_compute[230183]: 2025-11-23 21:15:54.924 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:54 compute-1 NetworkManager[49021]: <info>  [1763932554.9266] manager: (patch-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Nov 23 21:15:54 compute-1 NetworkManager[49021]: <info>  [1763932554.9272] manager: (patch-br-int-to-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Nov 23 21:15:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:54.929 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:cb:53 10.100.0.14'], port_security=['fa:16:3e:19:cb:53 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4384cda9-2a35-4df4-84b1-a045a41852ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4b48e986-896c-496c-81ed-a29a0452333b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=831ed7cd-9739-4cae-9853-0a7c3c8eb72f, chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=deb2e9cc-993f-4f9a-934e-0921fdf22170) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 21:15:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:54.930 142158 INFO neutron.agent.ovn.metadata.agent [-] Port deb2e9cc-993f-4f9a-934e-0921fdf22170 in datapath 2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7 bound to our chassis
Nov 23 21:15:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:54.931 142158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7
Nov 23 21:15:54 compute-1 systemd-machined[193469]: New machine qemu-7-instance-0000000c.
Nov 23 21:15:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:54.941 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[436a0b9e-b3ef-40ad-904a-8d97c8539b85]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:15:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:54.942 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2b2cbb2b-41 in ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 23 21:15:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:54.943 233901 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2b2cbb2b-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 23 21:15:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:54.943 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[7914774e-451f-4b16-a867-b1fe72c4f05b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:15:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:54.944 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[25995d34-1d05-4d96-b6d6-b895f8cf9def]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:15:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:54.957 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[20825745-9244-4681-95f7-c783513f1b82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:15:54 compute-1 systemd[1]: Started Virtual Machine qemu-7-instance-0000000c.
Nov 23 21:15:54 compute-1 systemd-udevd[243108]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 21:15:54 compute-1 NetworkManager[49021]: <info>  [1763932554.9789] device (tapdeb2e9cc-99): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 23 21:15:54 compute-1 NetworkManager[49021]: <info>  [1763932554.9800] device (tapdeb2e9cc-99): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 23 21:15:54 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:54.981 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[cddcde1f-2721-4e52-b9c5-588cf776eb83]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.006 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:55.006 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[3d4b6baa-a916-4349-a82a-69b63c01c781]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.013 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:55 compute-1 NetworkManager[49021]: <info>  [1763932555.0193] manager: (tap2b2cbb2b-40): new Veth device (/org/freedesktop/NetworkManager/Devices/75)
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:55.019 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[5413d493-3ab7-4ffc-9543-a3f05b2c390e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:15:55 compute-1 ovn_controller[132845]: 2025-11-23T21:15:55Z|00117|binding|INFO|Setting lport deb2e9cc-993f-4f9a-934e-0921fdf22170 ovn-installed in OVS
Nov 23 21:15:55 compute-1 ovn_controller[132845]: 2025-11-23T21:15:55Z|00118|binding|INFO|Setting lport deb2e9cc-993f-4f9a-934e-0921fdf22170 up in Southbound
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.027 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:55.047 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[5bb35861-23e0-4cd1-a838-cd9ba058e76f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:55.049 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[fe324592-316b-43bc-8d69-9b0a8f198701]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:15:55 compute-1 NetworkManager[49021]: <info>  [1763932555.0673] device (tap2b2cbb2b-40): carrier: link connected
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:55.071 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[c63c9a68-1897-4915-841d-e5ac4ecbcab2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:55.085 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[2181b6e8-c861-459d-8fa7-1ef75d80df64]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b2cbb2b-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a2:f6:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 450363, 'reachable_time': 35564, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243138, 'error': None, 'target': 'ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:55.100 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[405e5244-489e-4427-8556-ef919c076cc5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea2:f637'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 450363, 'tstamp': 450363}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243139, 'error': None, 'target': 'ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:55.112 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[9bb8d225-92aa-4b2c-b986-4498fd789a9a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b2cbb2b-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a2:f6:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 450363, 'reachable_time': 35564, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 243140, 'error': None, 'target': 'ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:55.140 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[4705223b-f14b-43fb-aae5-4945bb46f42a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:55.191 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[ed237c63-3025-42f2-9613-d87dc71a8e60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:55.192 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b2cbb2b-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:55.192 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:55.192 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b2cbb2b-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:15:55 compute-1 kernel: tap2b2cbb2b-40: entered promiscuous mode
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.194 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:55 compute-1 NetworkManager[49021]: <info>  [1763932555.1960] manager: (tap2b2cbb2b-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.198 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:55.199 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b2cbb2b-40, col_values=(('external_ids', {'iface-id': '7a9e60a2-aaf5-412e-8508-c425a028014e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.200 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:55 compute-1 ovn_controller[132845]: 2025-11-23T21:15:55Z|00119|binding|INFO|Releasing lport 7a9e60a2-aaf5-412e-8508-c425a028014e from this chassis (sb_readonly=0)
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.200 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:55.201 142158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:55.202 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[35f099f9-c01e-4ebd-a42a-1e42b4a6d906]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:55.203 142158 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]: global
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]:     log         /dev/log local0 debug
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]:     log-tag     haproxy-metadata-proxy-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]:     user        root
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]:     group       root
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]:     maxconn     1024
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]:     pidfile     /var/lib/neutron/external/pids/2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7.pid.haproxy
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]:     daemon
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]: 
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]: defaults
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]:     log global
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]:     mode http
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]:     option httplog
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]:     option dontlognull
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]:     option http-server-close
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]:     option forwardfor
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]:     retries                 3
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]:     timeout http-request    30s
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]:     timeout connect         30s
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]:     timeout client          32s
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]:     timeout server          32s
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]:     timeout http-keep-alive 30s
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]: 
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]: 
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]: listen listener
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]:     bind 169.254.169.254:80
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]:     server metadata /var/lib/neutron/metadata_proxy
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]:     http-request add-header X-OVN-Network-ID 2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 23 21:15:55 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:15:55.204 142158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7', 'env', 'PROCESS_TAG=haproxy-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.212 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.468 230187 DEBUG nova.compute.manager [req-f4620e74-8350-4a9b-bdb7-01d47cc8d78a req-280107a9-398d-4a73-ba39-e1ba058e783f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Received event network-vif-plugged-deb2e9cc-993f-4f9a-934e-0921fdf22170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.468 230187 DEBUG oslo_concurrency.lockutils [req-f4620e74-8350-4a9b-bdb7-01d47cc8d78a req-280107a9-398d-4a73-ba39-e1ba058e783f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "4384cda9-2a35-4df4-84b1-a045a41852ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.468 230187 DEBUG oslo_concurrency.lockutils [req-f4620e74-8350-4a9b-bdb7-01d47cc8d78a req-280107a9-398d-4a73-ba39-e1ba058e783f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4384cda9-2a35-4df4-84b1-a045a41852ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.469 230187 DEBUG oslo_concurrency.lockutils [req-f4620e74-8350-4a9b-bdb7-01d47cc8d78a req-280107a9-398d-4a73-ba39-e1ba058e783f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4384cda9-2a35-4df4-84b1-a045a41852ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.469 230187 DEBUG nova.compute.manager [req-f4620e74-8350-4a9b-bdb7-01d47cc8d78a req-280107a9-398d-4a73-ba39-e1ba058e783f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Processing event network-vif-plugged-deb2e9cc-993f-4f9a-934e-0921fdf22170 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 23 21:15:55 compute-1 podman[243172]: 2025-11-23 21:15:55.537964837 +0000 UTC m=+0.051331941 container create 5f3d7e4271a37dd6a522057800b0dddfab00adc1fb3b8d8070ea2e4312e68ff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 23 21:15:55 compute-1 systemd[1]: Started libpod-conmon-5f3d7e4271a37dd6a522057800b0dddfab00adc1fb3b8d8070ea2e4312e68ff0.scope.
Nov 23 21:15:55 compute-1 systemd[1]: Started libcrun container.
Nov 23 21:15:55 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18067dbd2039a8291e31d1524e9c7847c294eb0b15485a8b90bbade1b71fdea0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 21:15:55 compute-1 podman[243172]: 2025-11-23 21:15:55.509023524 +0000 UTC m=+0.022390658 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 23 21:15:55 compute-1 podman[243172]: 2025-11-23 21:15:55.609187359 +0000 UTC m=+0.122554483 container init 5f3d7e4271a37dd6a522057800b0dddfab00adc1fb3b8d8070ea2e4312e68ff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 23 21:15:55 compute-1 podman[243172]: 2025-11-23 21:15:55.615028844 +0000 UTC m=+0.128395948 container start 5f3d7e4271a37dd6a522057800b0dddfab00adc1fb3b8d8070ea2e4312e68ff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 23 21:15:55 compute-1 neutron-haproxy-ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7[243223]: [NOTICE]   (243233) : New worker (243235) forked
Nov 23 21:15:55 compute-1 neutron-haproxy-ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7[243223]: [NOTICE]   (243233) : Loading success.
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.676 230187 DEBUG nova.compute.manager [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.677 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932555.6761324, 4384cda9-2a35-4df4-84b1-a045a41852ac => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.677 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] VM Started (Lifecycle Event)
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.679 230187 DEBUG nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.682 230187 INFO nova.virt.libvirt.driver [-] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Instance spawned successfully.
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.682 230187 DEBUG nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.704 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.708 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.711 230187 DEBUG nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.711 230187 DEBUG nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.712 230187 DEBUG nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.712 230187 DEBUG nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.712 230187 DEBUG nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.713 230187 DEBUG nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:15:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:55.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.741 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.741 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932555.6773012, 4384cda9-2a35-4df4-84b1-a045a41852ac => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.741 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] VM Paused (Lifecycle Event)
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.765 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.768 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932555.679522, 4384cda9-2a35-4df4-84b1-a045a41852ac => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.768 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] VM Resumed (Lifecycle Event)
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.782 230187 INFO nova.compute.manager [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Took 10.34 seconds to spawn the instance on the hypervisor.
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.782 230187 DEBUG nova.compute.manager [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.804 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.806 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.834 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.859 230187 INFO nova.compute.manager [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Took 11.27 seconds to build instance.
Nov 23 21:15:55 compute-1 nova_compute[230183]: 2025-11-23 21:15:55.877 230187 DEBUG oslo_concurrency.lockutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "4384cda9-2a35-4df4-84b1-a045a41852ac" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.395s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:15:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:15:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:56.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:15:57 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:15:57 compute-1 nova_compute[230183]: 2025-11-23 21:15:57.510 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:57 compute-1 nova_compute[230183]: 2025-11-23 21:15:57.550 230187 DEBUG nova.compute.manager [req-c04ff297-64bc-428e-a92d-4336cb4bea21 req-fbb8b70d-1fa2-48a7-b23c-980837237a34 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Received event network-vif-plugged-deb2e9cc-993f-4f9a-934e-0921fdf22170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:15:57 compute-1 nova_compute[230183]: 2025-11-23 21:15:57.551 230187 DEBUG oslo_concurrency.lockutils [req-c04ff297-64bc-428e-a92d-4336cb4bea21 req-fbb8b70d-1fa2-48a7-b23c-980837237a34 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "4384cda9-2a35-4df4-84b1-a045a41852ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:15:57 compute-1 nova_compute[230183]: 2025-11-23 21:15:57.551 230187 DEBUG oslo_concurrency.lockutils [req-c04ff297-64bc-428e-a92d-4336cb4bea21 req-fbb8b70d-1fa2-48a7-b23c-980837237a34 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4384cda9-2a35-4df4-84b1-a045a41852ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:15:57 compute-1 nova_compute[230183]: 2025-11-23 21:15:57.551 230187 DEBUG oslo_concurrency.lockutils [req-c04ff297-64bc-428e-a92d-4336cb4bea21 req-fbb8b70d-1fa2-48a7-b23c-980837237a34 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4384cda9-2a35-4df4-84b1-a045a41852ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:15:57 compute-1 nova_compute[230183]: 2025-11-23 21:15:57.551 230187 DEBUG nova.compute.manager [req-c04ff297-64bc-428e-a92d-4336cb4bea21 req-fbb8b70d-1fa2-48a7-b23c-980837237a34 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] No waiting events found dispatching network-vif-plugged-deb2e9cc-993f-4f9a-934e-0921fdf22170 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 21:15:57 compute-1 nova_compute[230183]: 2025-11-23 21:15:57.551 230187 WARNING nova.compute.manager [req-c04ff297-64bc-428e-a92d-4336cb4bea21 req-fbb8b70d-1fa2-48a7-b23c-980837237a34 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Received unexpected event network-vif-plugged-deb2e9cc-993f-4f9a-934e-0921fdf22170 for instance with vm_state active and task_state None.
Nov 23 21:15:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:15:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:57.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:15:57 compute-1 ceph-mon[80135]: pgmap v1036: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 345 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Nov 23 21:15:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:15:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:58.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:15:58 compute-1 sshd-session[243246]: Invalid user sol from 161.35.133.66 port 36240
Nov 23 21:15:58 compute-1 sshd-session[243246]: Connection closed by invalid user sol 161.35.133.66 port 36240 [preauth]
Nov 23 21:15:58 compute-1 nova_compute[230183]: 2025-11-23 21:15:58.800 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:15:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:15:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:15:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:59.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:15:59 compute-1 ceph-mon[80135]: pgmap v1037: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 345 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Nov 23 21:16:00 compute-1 podman[243250]: 2025-11-23 21:16:00.672286694 +0000 UTC m=+0.069984429 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 21:16:00 compute-1 podman[243249]: 2025-11-23 21:16:00.737071973 +0000 UTC m=+0.146304066 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 23 21:16:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:16:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:00.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:16:00 compute-1 ceph-mon[80135]: pgmap v1038: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 166 op/s
Nov 23 21:16:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:16:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:01.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:16:02 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:16:02 compute-1 nova_compute[230183]: 2025-11-23 21:16:02.515 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:16:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:16:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:02.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:16:03 compute-1 nova_compute[230183]: 2025-11-23 21:16:03.251 230187 DEBUG nova.compute.manager [req-f33e3d4d-1f97-4d25-a58f-8d1c5c26caf6 req-452c60b0-3cb0-49bd-a288-d22184256254 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Received event network-changed-deb2e9cc-993f-4f9a-934e-0921fdf22170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:16:03 compute-1 nova_compute[230183]: 2025-11-23 21:16:03.251 230187 DEBUG nova.compute.manager [req-f33e3d4d-1f97-4d25-a58f-8d1c5c26caf6 req-452c60b0-3cb0-49bd-a288-d22184256254 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Refreshing instance network info cache due to event network-changed-deb2e9cc-993f-4f9a-934e-0921fdf22170. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 23 21:16:03 compute-1 nova_compute[230183]: 2025-11-23 21:16:03.252 230187 DEBUG oslo_concurrency.lockutils [req-f33e3d4d-1f97-4d25-a58f-8d1c5c26caf6 req-452c60b0-3cb0-49bd-a288-d22184256254 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-4384cda9-2a35-4df4-84b1-a045a41852ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:16:03 compute-1 nova_compute[230183]: 2025-11-23 21:16:03.252 230187 DEBUG oslo_concurrency.lockutils [req-f33e3d4d-1f97-4d25-a58f-8d1c5c26caf6 req-452c60b0-3cb0-49bd-a288-d22184256254 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-4384cda9-2a35-4df4-84b1-a045a41852ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:16:03 compute-1 nova_compute[230183]: 2025-11-23 21:16:03.252 230187 DEBUG nova.network.neutron [req-f33e3d4d-1f97-4d25-a58f-8d1c5c26caf6 req-452c60b0-3cb0-49bd-a288-d22184256254 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Refreshing network info cache for port deb2e9cc-993f-4f9a-934e-0921fdf22170 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 23 21:16:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:16:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:03.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:16:03 compute-1 ceph-mon[80135]: pgmap v1039: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 50 KiB/s wr, 84 op/s
Nov 23 21:16:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:16:03 compute-1 nova_compute[230183]: 2025-11-23 21:16:03.803 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:16:04 compute-1 nova_compute[230183]: 2025-11-23 21:16:04.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:16:04 compute-1 nova_compute[230183]: 2025-11-23 21:16:04.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:16:04 compute-1 nova_compute[230183]: 2025-11-23 21:16:04.448 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:16:04 compute-1 nova_compute[230183]: 2025-11-23 21:16:04.449 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:16:04 compute-1 nova_compute[230183]: 2025-11-23 21:16:04.449 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:16:04 compute-1 nova_compute[230183]: 2025-11-23 21:16:04.450 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 21:16:04 compute-1 nova_compute[230183]: 2025-11-23 21:16:04.450 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:16:04 compute-1 nova_compute[230183]: 2025-11-23 21:16:04.575 230187 DEBUG nova.network.neutron [req-f33e3d4d-1f97-4d25-a58f-8d1c5c26caf6 req-452c60b0-3cb0-49bd-a288-d22184256254 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Updated VIF entry in instance network info cache for port deb2e9cc-993f-4f9a-934e-0921fdf22170. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 23 21:16:04 compute-1 nova_compute[230183]: 2025-11-23 21:16:04.577 230187 DEBUG nova.network.neutron [req-f33e3d4d-1f97-4d25-a58f-8d1c5c26caf6 req-452c60b0-3cb0-49bd-a288-d22184256254 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Updating instance_info_cache with network_info: [{"id": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "address": "fa:16:3e:19:cb:53", "network": {"id": "2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7", "bridge": "br-int", "label": "tempest-network-smoke--943664961", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb2e9cc-99", "ovs_interfaceid": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:16:04 compute-1 nova_compute[230183]: 2025-11-23 21:16:04.597 230187 DEBUG oslo_concurrency.lockutils [req-f33e3d4d-1f97-4d25-a58f-8d1c5c26caf6 req-452c60b0-3cb0-49bd-a288-d22184256254 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-4384cda9-2a35-4df4-84b1-a045a41852ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:16:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:16:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:04.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:16:04 compute-1 nova_compute[230183]: 2025-11-23 21:16:04.899 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:16:04 compute-1 nova_compute[230183]: 2025-11-23 21:16:04.961 230187 DEBUG nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 23 21:16:04 compute-1 nova_compute[230183]: 2025-11-23 21:16:04.962 230187 DEBUG nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 23 21:16:05 compute-1 podman[243316]: 2025-11-23 21:16:05.006707328 +0000 UTC m=+0.063888577 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 23 21:16:05 compute-1 nova_compute[230183]: 2025-11-23 21:16:05.101 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 21:16:05 compute-1 nova_compute[230183]: 2025-11-23 21:16:05.102 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4707MB free_disk=59.92185592651367GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 21:16:05 compute-1 nova_compute[230183]: 2025-11-23 21:16:05.103 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:16:05 compute-1 nova_compute[230183]: 2025-11-23 21:16:05.103 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:16:05 compute-1 nova_compute[230183]: 2025-11-23 21:16:05.168 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Instance 4384cda9-2a35-4df4-84b1-a045a41852ac actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 23 21:16:05 compute-1 nova_compute[230183]: 2025-11-23 21:16:05.168 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 21:16:05 compute-1 nova_compute[230183]: 2025-11-23 21:16:05.168 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 21:16:05 compute-1 nova_compute[230183]: 2025-11-23 21:16:05.242 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:16:05 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:16:05 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4169750381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:16:05 compute-1 nova_compute[230183]: 2025-11-23 21:16:05.700 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:16:05 compute-1 nova_compute[230183]: 2025-11-23 21:16:05.706 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:16:05 compute-1 nova_compute[230183]: 2025-11-23 21:16:05.722 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:16:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:16:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:05.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:16:05 compute-1 nova_compute[230183]: 2025-11-23 21:16:05.749 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 21:16:05 compute-1 nova_compute[230183]: 2025-11-23 21:16:05.750 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:16:05 compute-1 ceph-mon[80135]: pgmap v1040: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 50 KiB/s wr, 85 op/s
Nov 23 21:16:05 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2330035066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:16:05 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/4169750381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:16:06 compute-1 sudo[243360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:16:06 compute-1 sudo[243360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:16:06 compute-1 sudo[243360]: pam_unix(sudo:session): session closed for user root
Nov 23 21:16:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:16:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:06.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:16:06 compute-1 nova_compute[230183]: 2025-11-23 21:16:06.749 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:16:07 compute-1 ceph-mon[80135]: pgmap v1041: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 75 op/s
Nov 23 21:16:07 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:16:07 compute-1 nova_compute[230183]: 2025-11-23 21:16:07.422 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:16:07 compute-1 nova_compute[230183]: 2025-11-23 21:16:07.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:16:07 compute-1 nova_compute[230183]: 2025-11-23 21:16:07.518 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:16:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:16:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:07.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:16:07 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 21:16:07 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2402619270' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 21:16:07 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 21:16:07 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2402619270' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 21:16:08 compute-1 nova_compute[230183]: 2025-11-23 21:16:08.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:16:08 compute-1 nova_compute[230183]: 2025-11-23 21:16:08.426 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 21:16:08 compute-1 nova_compute[230183]: 2025-11-23 21:16:08.427 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 21:16:08 compute-1 nova_compute[230183]: 2025-11-23 21:16:08.590 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "refresh_cache-4384cda9-2a35-4df4-84b1-a045a41852ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:16:08 compute-1 nova_compute[230183]: 2025-11-23 21:16:08.590 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquired lock "refresh_cache-4384cda9-2a35-4df4-84b1-a045a41852ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:16:08 compute-1 nova_compute[230183]: 2025-11-23 21:16:08.591 230187 DEBUG nova.network.neutron [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 23 21:16:08 compute-1 nova_compute[230183]: 2025-11-23 21:16:08.591 230187 DEBUG nova.objects.instance [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4384cda9-2a35-4df4-84b1-a045a41852ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:16:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/2402619270' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 21:16:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/2402619270' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 21:16:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:16:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:08.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:16:08 compute-1 nova_compute[230183]: 2025-11-23 21:16:08.805 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:16:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:16:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:09.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:16:09 compute-1 ceph-mon[80135]: pgmap v1042: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 75 op/s
Nov 23 21:16:10 compute-1 ovn_controller[132845]: 2025-11-23T21:16:10Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:19:cb:53 10.100.0.14
Nov 23 21:16:10 compute-1 ovn_controller[132845]: 2025-11-23T21:16:10Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:19:cb:53 10.100.0.14
Nov 23 21:16:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:16:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:10.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:16:10 compute-1 ceph-mon[80135]: pgmap v1043: 337 pgs: 337 active+clean; 188 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 116 op/s
Nov 23 21:16:11 compute-1 nova_compute[230183]: 2025-11-23 21:16:11.350 230187 DEBUG nova.network.neutron [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Updating instance_info_cache with network_info: [{"id": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "address": "fa:16:3e:19:cb:53", "network": {"id": "2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7", "bridge": "br-int", "label": "tempest-network-smoke--943664961", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb2e9cc-99", "ovs_interfaceid": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:16:11 compute-1 nova_compute[230183]: 2025-11-23 21:16:11.365 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Releasing lock "refresh_cache-4384cda9-2a35-4df4-84b1-a045a41852ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:16:11 compute-1 nova_compute[230183]: 2025-11-23 21:16:11.365 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 23 21:16:11 compute-1 nova_compute[230183]: 2025-11-23 21:16:11.366 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:16:11 compute-1 nova_compute[230183]: 2025-11-23 21:16:11.366 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:16:11 compute-1 nova_compute[230183]: 2025-11-23 21:16:11.366 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 21:16:11 compute-1 nova_compute[230183]: 2025-11-23 21:16:11.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:16:11 compute-1 nova_compute[230183]: 2025-11-23 21:16:11.447 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:16:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:16:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:11.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:16:11 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1010952153' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:16:12 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:16:12 compute-1 nova_compute[230183]: 2025-11-23 21:16:12.520 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:16:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:16:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:12.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:16:12 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/62087560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:16:12 compute-1 ceph-mon[80135]: pgmap v1044: 337 pgs: 337 active+clean; 188 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 204 KiB/s rd, 2.0 MiB/s wr, 41 op/s
Nov 23 21:16:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:16:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:13.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:16:13 compute-1 nova_compute[230183]: 2025-11-23 21:16:13.808 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:16:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:16:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:14.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:16:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:16:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:15.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:16:15 compute-1 ceph-mon[80135]: pgmap v1045: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 294 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 23 21:16:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:16:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:16.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:16:17 compute-1 ceph-mon[80135]: pgmap v1046: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 294 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Nov 23 21:16:17 compute-1 sudo[243390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 21:16:17 compute-1 sudo[243390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:16:17 compute-1 sudo[243390]: pam_unix(sudo:session): session closed for user root
Nov 23 21:16:17 compute-1 sudo[243415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 21:16:17 compute-1 sudo[243415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:16:17 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:16:17 compute-1 nova_compute[230183]: 2025-11-23 21:16:17.522 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:16:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:16:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:17.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:16:17 compute-1 sudo[243415]: pam_unix(sudo:session): session closed for user root
Nov 23 21:16:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:16:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:16:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:18.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:16:18 compute-1 nova_compute[230183]: 2025-11-23 21:16:18.809 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:16:19 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1724600131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:16:19 compute-1 ceph-mon[80135]: pgmap v1047: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 294 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Nov 23 21:16:19 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3473381753' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:16:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:16:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:19.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:16:20 compute-1 nova_compute[230183]: 2025-11-23 21:16:20.618 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:16:20 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:16:20.619 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 21:16:20 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:16:20.621 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 23 21:16:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:16:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:20.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:16:21 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:16:21 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:16:21 compute-1 ceph-mon[80135]: pgmap v1048: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 294 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 23 21:16:21 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:16:21 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 21:16:21 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:16:21 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:16:21 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 21:16:21 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 21:16:21 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:16:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:16:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:21.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:16:22 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:16:22 compute-1 ceph-mon[80135]: pgmap v1049: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 123 KiB/s wr, 24 op/s
Nov 23 21:16:22 compute-1 nova_compute[230183]: 2025-11-23 21:16:22.543 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:16:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:16:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:22.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:16:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:16:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:23.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:16:23 compute-1 nova_compute[230183]: 2025-11-23 21:16:23.812 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:16:24 compute-1 ceph-mon[80135]: pgmap v1050: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 123 KiB/s wr, 24 op/s
Nov 23 21:16:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:16:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:24.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:16:24 compute-1 nova_compute[230183]: 2025-11-23 21:16:24.874 230187 DEBUG nova.compute.manager [req-d949908f-d638-426d-a6d8-4c3482289077 req-61db7f0f-ce0b-4efc-bc36-f52e62bd738c 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Received event network-changed-deb2e9cc-993f-4f9a-934e-0921fdf22170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:16:24 compute-1 nova_compute[230183]: 2025-11-23 21:16:24.875 230187 DEBUG nova.compute.manager [req-d949908f-d638-426d-a6d8-4c3482289077 req-61db7f0f-ce0b-4efc-bc36-f52e62bd738c 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Refreshing instance network info cache due to event network-changed-deb2e9cc-993f-4f9a-934e-0921fdf22170. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 23 21:16:24 compute-1 nova_compute[230183]: 2025-11-23 21:16:24.875 230187 DEBUG oslo_concurrency.lockutils [req-d949908f-d638-426d-a6d8-4c3482289077 req-61db7f0f-ce0b-4efc-bc36-f52e62bd738c 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-4384cda9-2a35-4df4-84b1-a045a41852ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:16:24 compute-1 nova_compute[230183]: 2025-11-23 21:16:24.876 230187 DEBUG oslo_concurrency.lockutils [req-d949908f-d638-426d-a6d8-4c3482289077 req-61db7f0f-ce0b-4efc-bc36-f52e62bd738c 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-4384cda9-2a35-4df4-84b1-a045a41852ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:16:24 compute-1 nova_compute[230183]: 2025-11-23 21:16:24.876 230187 DEBUG nova.network.neutron [req-d949908f-d638-426d-a6d8-4c3482289077 req-61db7f0f-ce0b-4efc-bc36-f52e62bd738c 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Refreshing network info cache for port deb2e9cc-993f-4f9a-934e-0921fdf22170 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 23 21:16:24 compute-1 nova_compute[230183]: 2025-11-23 21:16:24.991 230187 DEBUG oslo_concurrency.lockutils [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "4384cda9-2a35-4df4-84b1-a045a41852ac" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:16:24 compute-1 nova_compute[230183]: 2025-11-23 21:16:24.992 230187 DEBUG oslo_concurrency.lockutils [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "4384cda9-2a35-4df4-84b1-a045a41852ac" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:16:24 compute-1 nova_compute[230183]: 2025-11-23 21:16:24.992 230187 DEBUG oslo_concurrency.lockutils [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "4384cda9-2a35-4df4-84b1-a045a41852ac-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:16:24 compute-1 nova_compute[230183]: 2025-11-23 21:16:24.992 230187 DEBUG oslo_concurrency.lockutils [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "4384cda9-2a35-4df4-84b1-a045a41852ac-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:16:24 compute-1 nova_compute[230183]: 2025-11-23 21:16:24.992 230187 DEBUG oslo_concurrency.lockutils [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "4384cda9-2a35-4df4-84b1-a045a41852ac-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:16:24 compute-1 nova_compute[230183]: 2025-11-23 21:16:24.993 230187 INFO nova.compute.manager [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Terminating instance
Nov 23 21:16:24 compute-1 nova_compute[230183]: 2025-11-23 21:16:24.994 230187 DEBUG nova.compute.manager [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 23 21:16:25 compute-1 kernel: tapdeb2e9cc-99 (unregistering): left promiscuous mode
Nov 23 21:16:25 compute-1 NetworkManager[49021]: <info>  [1763932585.0426] device (tapdeb2e9cc-99): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 23 21:16:25 compute-1 ovn_controller[132845]: 2025-11-23T21:16:25Z|00120|binding|INFO|Releasing lport deb2e9cc-993f-4f9a-934e-0921fdf22170 from this chassis (sb_readonly=0)
Nov 23 21:16:25 compute-1 ovn_controller[132845]: 2025-11-23T21:16:25Z|00121|binding|INFO|Setting lport deb2e9cc-993f-4f9a-934e-0921fdf22170 down in Southbound
Nov 23 21:16:25 compute-1 nova_compute[230183]: 2025-11-23 21:16:25.051 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:16:25 compute-1 ovn_controller[132845]: 2025-11-23T21:16:25Z|00122|binding|INFO|Removing iface tapdeb2e9cc-99 ovn-installed in OVS
Nov 23 21:16:25 compute-1 nova_compute[230183]: 2025-11-23 21:16:25.053 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:16:25 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:16:25.062 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:cb:53 10.100.0.14'], port_security=['fa:16:3e:19:cb:53 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4384cda9-2a35-4df4-84b1-a045a41852ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4b48e986-896c-496c-81ed-a29a0452333b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=831ed7cd-9739-4cae-9853-0a7c3c8eb72f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=deb2e9cc-993f-4f9a-934e-0921fdf22170) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 21:16:25 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:16:25.065 142158 INFO neutron.agent.ovn.metadata.agent [-] Port deb2e9cc-993f-4f9a-934e-0921fdf22170 in datapath 2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7 unbound from our chassis
Nov 23 21:16:25 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:16:25.066 142158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 23 21:16:25 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:16:25.068 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[4d848597-89f0-4bf4-a4d0-8a9f5b196dd5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:16:25 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:16:25.068 142158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7 namespace which is not needed anymore
Nov 23 21:16:25 compute-1 nova_compute[230183]: 2025-11-23 21:16:25.072 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:16:25 compute-1 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Nov 23 21:16:25 compute-1 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000c.scope: Consumed 14.278s CPU time.
Nov 23 21:16:25 compute-1 systemd-machined[193469]: Machine qemu-7-instance-0000000c terminated.
Nov 23 21:16:25 compute-1 neutron-haproxy-ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7[243223]: [NOTICE]   (243233) : haproxy version is 2.8.14-c23fe91
Nov 23 21:16:25 compute-1 neutron-haproxy-ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7[243223]: [NOTICE]   (243233) : path to executable is /usr/sbin/haproxy
Nov 23 21:16:25 compute-1 neutron-haproxy-ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7[243223]: [WARNING]  (243233) : Exiting Master process...
Nov 23 21:16:25 compute-1 neutron-haproxy-ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7[243223]: [ALERT]    (243233) : Current worker (243235) exited with code 143 (Terminated)
Nov 23 21:16:25 compute-1 neutron-haproxy-ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7[243223]: [WARNING]  (243233) : All workers exited. Exiting... (0)
Nov 23 21:16:25 compute-1 systemd[1]: libpod-5f3d7e4271a37dd6a522057800b0dddfab00adc1fb3b8d8070ea2e4312e68ff0.scope: Deactivated successfully.
Nov 23 21:16:25 compute-1 podman[243499]: 2025-11-23 21:16:25.193983781 +0000 UTC m=+0.039970208 container died 5f3d7e4271a37dd6a522057800b0dddfab00adc1fb3b8d8070ea2e4312e68ff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 21:16:25 compute-1 nova_compute[230183]: 2025-11-23 21:16:25.211 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:16:25 compute-1 nova_compute[230183]: 2025-11-23 21:16:25.215 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:16:25 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5f3d7e4271a37dd6a522057800b0dddfab00adc1fb3b8d8070ea2e4312e68ff0-userdata-shm.mount: Deactivated successfully.
Nov 23 21:16:25 compute-1 nova_compute[230183]: 2025-11-23 21:16:25.224 230187 INFO nova.virt.libvirt.driver [-] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Instance destroyed successfully.
Nov 23 21:16:25 compute-1 nova_compute[230183]: 2025-11-23 21:16:25.224 230187 DEBUG nova.objects.instance [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'resources' on Instance uuid 4384cda9-2a35-4df4-84b1-a045a41852ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:16:25 compute-1 systemd[1]: var-lib-containers-storage-overlay-18067dbd2039a8291e31d1524e9c7847c294eb0b15485a8b90bbade1b71fdea0-merged.mount: Deactivated successfully.
Nov 23 21:16:25 compute-1 nova_compute[230183]: 2025-11-23 21:16:25.234 230187 DEBUG nova.virt.libvirt.vif [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:15:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1855303843',display_name='tempest-TestNetworkBasicOps-server-1855303843',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1855303843',id=12,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAycEnFV4AnrY6tCOqSabQ0TZJ55Jf3TdrBRrViOQ4YjFLRSLQxmifTjYTiV91MZtamqBqC7Pgt4UqC3q5yq6gNP1UI71Vl55q0bshrNqJ4oe/KPbzHMTwu1zmJ8/r6BYA==',key_name='tempest-TestNetworkBasicOps-403372706',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:15:55Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-g8xsviwh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:15:55Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=4384cda9-2a35-4df4-84b1-a045a41852ac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "address": "fa:16:3e:19:cb:53", "network": {"id": "2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7", "bridge": "br-int", "label": "tempest-network-smoke--943664961", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb2e9cc-99", "ovs_interfaceid": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 23 21:16:25 compute-1 nova_compute[230183]: 2025-11-23 21:16:25.235 230187 DEBUG nova.network.os_vif_util [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "address": "fa:16:3e:19:cb:53", "network": {"id": "2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7", "bridge": "br-int", "label": "tempest-network-smoke--943664961", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb2e9cc-99", "ovs_interfaceid": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 23 21:16:25 compute-1 nova_compute[230183]: 2025-11-23 21:16:25.236 230187 DEBUG nova.network.os_vif_util [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:19:cb:53,bridge_name='br-int',has_traffic_filtering=True,id=deb2e9cc-993f-4f9a-934e-0921fdf22170,network=Network(2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdeb2e9cc-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 23 21:16:25 compute-1 nova_compute[230183]: 2025-11-23 21:16:25.236 230187 DEBUG os_vif [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:19:cb:53,bridge_name='br-int',has_traffic_filtering=True,id=deb2e9cc-993f-4f9a-934e-0921fdf22170,network=Network(2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdeb2e9cc-99') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 23 21:16:25 compute-1 podman[243499]: 2025-11-23 21:16:25.237594455 +0000 UTC m=+0.083580872 container cleanup 5f3d7e4271a37dd6a522057800b0dddfab00adc1fb3b8d8070ea2e4312e68ff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 21:16:25 compute-1 nova_compute[230183]: 2025-11-23 21:16:25.240 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:16:25 compute-1 nova_compute[230183]: 2025-11-23 21:16:25.241 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdeb2e9cc-99, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:16:25 compute-1 nova_compute[230183]: 2025-11-23 21:16:25.242 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:16:25 compute-1 systemd[1]: libpod-conmon-5f3d7e4271a37dd6a522057800b0dddfab00adc1fb3b8d8070ea2e4312e68ff0.scope: Deactivated successfully.
Nov 23 21:16:25 compute-1 nova_compute[230183]: 2025-11-23 21:16:25.245 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:16:25 compute-1 nova_compute[230183]: 2025-11-23 21:16:25.247 230187 INFO os_vif [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:19:cb:53,bridge_name='br-int',has_traffic_filtering=True,id=deb2e9cc-993f-4f9a-934e-0921fdf22170,network=Network(2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdeb2e9cc-99')
Nov 23 21:16:25 compute-1 podman[243538]: 2025-11-23 21:16:25.306959626 +0000 UTC m=+0.047819007 container remove 5f3d7e4271a37dd6a522057800b0dddfab00adc1fb3b8d8070ea2e4312e68ff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 23 21:16:25 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:16:25.312 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[9834d19b-feda-4f8d-9813-4ba3f36e92fd]: (4, ('Sun Nov 23 09:16:25 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7 (5f3d7e4271a37dd6a522057800b0dddfab00adc1fb3b8d8070ea2e4312e68ff0)\n5f3d7e4271a37dd6a522057800b0dddfab00adc1fb3b8d8070ea2e4312e68ff0\nSun Nov 23 09:16:25 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7 (5f3d7e4271a37dd6a522057800b0dddfab00adc1fb3b8d8070ea2e4312e68ff0)\n5f3d7e4271a37dd6a522057800b0dddfab00adc1fb3b8d8070ea2e4312e68ff0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:16:25 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:16:25.313 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[f9838921-041e-46de-a916-80e25be0de9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:16:25 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:16:25.314 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b2cbb2b-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:16:25 compute-1 kernel: tap2b2cbb2b-40: left promiscuous mode
Nov 23 21:16:25 compute-1 nova_compute[230183]: 2025-11-23 21:16:25.316 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:16:25 compute-1 nova_compute[230183]: 2025-11-23 21:16:25.331 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:16:25 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:16:25.332 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[c185db4e-7e13-4c8e-a0ae-ca635342fc97]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:16:25 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:16:25.355 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[7e170269-1475-48a0-841b-ff5a5732ced8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:16:25 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:16:25.356 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[0aa58569-b79f-4548-806d-8986e5db16b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:16:25 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:16:25.376 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[eec2a321-4ac0-425a-bd69-f4d346ed71c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 450357, 'reachable_time': 27697, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243570, 'error': None, 'target': 'ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:16:25 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:16:25.379 142272 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 23 21:16:25 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:16:25.379 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[5a95c1cd-43bc-45ba-9f13-3e3b149831bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:16:25 compute-1 systemd[1]: run-netns-ovnmeta\x2d2b2cbb2b\x2d4635\x2d48f6\x2d97b3\x2db4c96d1d06f7.mount: Deactivated successfully.
Nov 23 21:16:25 compute-1 nova_compute[230183]: 2025-11-23 21:16:25.656 230187 INFO nova.virt.libvirt.driver [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Deleting instance files /var/lib/nova/instances/4384cda9-2a35-4df4-84b1-a045a41852ac_del
Nov 23 21:16:25 compute-1 nova_compute[230183]: 2025-11-23 21:16:25.657 230187 INFO nova.virt.libvirt.driver [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Deletion of /var/lib/nova/instances/4384cda9-2a35-4df4-84b1-a045a41852ac_del complete
Nov 23 21:16:25 compute-1 nova_compute[230183]: 2025-11-23 21:16:25.715 230187 INFO nova.compute.manager [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Took 0.72 seconds to destroy the instance on the hypervisor.
Nov 23 21:16:25 compute-1 nova_compute[230183]: 2025-11-23 21:16:25.716 230187 DEBUG oslo.service.loopingcall [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 23 21:16:25 compute-1 nova_compute[230183]: 2025-11-23 21:16:25.716 230187 DEBUG nova.compute.manager [-] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 23 21:16:25 compute-1 nova_compute[230183]: 2025-11-23 21:16:25.716 230187 DEBUG nova.network.neutron [-] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 23 21:16:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:16:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:25.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:16:25 compute-1 sudo[243573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 21:16:25 compute-1 sudo[243573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:16:25 compute-1 sudo[243573]: pam_unix(sudo:session): session closed for user root
Nov 23 21:16:26 compute-1 nova_compute[230183]: 2025-11-23 21:16:26.652 230187 DEBUG nova.network.neutron [req-d949908f-d638-426d-a6d8-4c3482289077 req-61db7f0f-ce0b-4efc-bc36-f52e62bd738c 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Updated VIF entry in instance network info cache for port deb2e9cc-993f-4f9a-934e-0921fdf22170. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 23 21:16:26 compute-1 nova_compute[230183]: 2025-11-23 21:16:26.653 230187 DEBUG nova.network.neutron [req-d949908f-d638-426d-a6d8-4c3482289077 req-61db7f0f-ce0b-4efc-bc36-f52e62bd738c 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Updating instance_info_cache with network_info: [{"id": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "address": "fa:16:3e:19:cb:53", "network": {"id": "2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7", "bridge": "br-int", "label": "tempest-network-smoke--943664961", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb2e9cc-99", "ovs_interfaceid": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:16:26 compute-1 nova_compute[230183]: 2025-11-23 21:16:26.673 230187 DEBUG oslo_concurrency.lockutils [req-d949908f-d638-426d-a6d8-4c3482289077 req-61db7f0f-ce0b-4efc-bc36-f52e62bd738c 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-4384cda9-2a35-4df4-84b1-a045a41852ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:16:26 compute-1 ceph-mon[80135]: pgmap v1051: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 20 KiB/s wr, 2 op/s
Nov 23 21:16:26 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:16:26 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:16:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:16:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:26.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:16:26 compute-1 sudo[243598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:16:26 compute-1 sudo[243598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:16:26 compute-1 sudo[243598]: pam_unix(sudo:session): session closed for user root
Nov 23 21:16:26 compute-1 nova_compute[230183]: 2025-11-23 21:16:26.798 230187 DEBUG nova.network.neutron [-] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:16:26 compute-1 nova_compute[230183]: 2025-11-23 21:16:26.813 230187 INFO nova.compute.manager [-] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Took 1.10 seconds to deallocate network for instance.
Nov 23 21:16:26 compute-1 nova_compute[230183]: 2025-11-23 21:16:26.864 230187 DEBUG oslo_concurrency.lockutils [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:16:26 compute-1 nova_compute[230183]: 2025-11-23 21:16:26.865 230187 DEBUG oslo_concurrency.lockutils [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:16:26 compute-1 nova_compute[230183]: 2025-11-23 21:16:26.939 230187 DEBUG oslo_concurrency.processutils [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:16:26 compute-1 nova_compute[230183]: 2025-11-23 21:16:26.982 230187 DEBUG nova.compute.manager [req-0c74657f-17ba-429c-a875-fde7ed452122 req-e25fcdd5-0cad-452c-96fa-99704bd09416 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Received event network-vif-unplugged-deb2e9cc-993f-4f9a-934e-0921fdf22170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:16:26 compute-1 nova_compute[230183]: 2025-11-23 21:16:26.983 230187 DEBUG oslo_concurrency.lockutils [req-0c74657f-17ba-429c-a875-fde7ed452122 req-e25fcdd5-0cad-452c-96fa-99704bd09416 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "4384cda9-2a35-4df4-84b1-a045a41852ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:16:26 compute-1 nova_compute[230183]: 2025-11-23 21:16:26.984 230187 DEBUG oslo_concurrency.lockutils [req-0c74657f-17ba-429c-a875-fde7ed452122 req-e25fcdd5-0cad-452c-96fa-99704bd09416 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4384cda9-2a35-4df4-84b1-a045a41852ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:16:26 compute-1 nova_compute[230183]: 2025-11-23 21:16:26.984 230187 DEBUG oslo_concurrency.lockutils [req-0c74657f-17ba-429c-a875-fde7ed452122 req-e25fcdd5-0cad-452c-96fa-99704bd09416 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4384cda9-2a35-4df4-84b1-a045a41852ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:16:26 compute-1 nova_compute[230183]: 2025-11-23 21:16:26.985 230187 DEBUG nova.compute.manager [req-0c74657f-17ba-429c-a875-fde7ed452122 req-e25fcdd5-0cad-452c-96fa-99704bd09416 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] No waiting events found dispatching network-vif-unplugged-deb2e9cc-993f-4f9a-934e-0921fdf22170 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 21:16:26 compute-1 nova_compute[230183]: 2025-11-23 21:16:26.985 230187 WARNING nova.compute.manager [req-0c74657f-17ba-429c-a875-fde7ed452122 req-e25fcdd5-0cad-452c-96fa-99704bd09416 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Received unexpected event network-vif-unplugged-deb2e9cc-993f-4f9a-934e-0921fdf22170 for instance with vm_state deleted and task_state None.
Nov 23 21:16:26 compute-1 nova_compute[230183]: 2025-11-23 21:16:26.986 230187 DEBUG nova.compute.manager [req-0c74657f-17ba-429c-a875-fde7ed452122 req-e25fcdd5-0cad-452c-96fa-99704bd09416 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Received event network-vif-plugged-deb2e9cc-993f-4f9a-934e-0921fdf22170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:16:26 compute-1 nova_compute[230183]: 2025-11-23 21:16:26.986 230187 DEBUG oslo_concurrency.lockutils [req-0c74657f-17ba-429c-a875-fde7ed452122 req-e25fcdd5-0cad-452c-96fa-99704bd09416 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "4384cda9-2a35-4df4-84b1-a045a41852ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:16:26 compute-1 nova_compute[230183]: 2025-11-23 21:16:26.987 230187 DEBUG oslo_concurrency.lockutils [req-0c74657f-17ba-429c-a875-fde7ed452122 req-e25fcdd5-0cad-452c-96fa-99704bd09416 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4384cda9-2a35-4df4-84b1-a045a41852ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:16:26 compute-1 nova_compute[230183]: 2025-11-23 21:16:26.987 230187 DEBUG oslo_concurrency.lockutils [req-0c74657f-17ba-429c-a875-fde7ed452122 req-e25fcdd5-0cad-452c-96fa-99704bd09416 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4384cda9-2a35-4df4-84b1-a045a41852ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:16:26 compute-1 nova_compute[230183]: 2025-11-23 21:16:26.988 230187 DEBUG nova.compute.manager [req-0c74657f-17ba-429c-a875-fde7ed452122 req-e25fcdd5-0cad-452c-96fa-99704bd09416 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] No waiting events found dispatching network-vif-plugged-deb2e9cc-993f-4f9a-934e-0921fdf22170 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 21:16:26 compute-1 nova_compute[230183]: 2025-11-23 21:16:26.988 230187 WARNING nova.compute.manager [req-0c74657f-17ba-429c-a875-fde7ed452122 req-e25fcdd5-0cad-452c-96fa-99704bd09416 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Received unexpected event network-vif-plugged-deb2e9cc-993f-4f9a-934e-0921fdf22170 for instance with vm_state deleted and task_state None.
Nov 23 21:16:27 compute-1 nova_compute[230183]: 2025-11-23 21:16:27.170 230187 DEBUG nova.compute.manager [req-4a7c358d-d791-4ea6-be6f-44bc64ca7be7 req-48b25b98-4a28-484e-9b9f-3b3e91394d1f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Received event network-vif-deleted-deb2e9cc-993f-4f9a-934e-0921fdf22170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:16:27 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:16:27 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:16:27 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/318558322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:16:27 compute-1 nova_compute[230183]: 2025-11-23 21:16:27.469 230187 DEBUG oslo_concurrency.processutils [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:16:27 compute-1 nova_compute[230183]: 2025-11-23 21:16:27.479 230187 DEBUG nova.compute.provider_tree [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:16:27 compute-1 nova_compute[230183]: 2025-11-23 21:16:27.510 230187 DEBUG nova.scheduler.client.report [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:16:27 compute-1 nova_compute[230183]: 2025-11-23 21:16:27.536 230187 DEBUG oslo_concurrency.lockutils [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:16:27 compute-1 nova_compute[230183]: 2025-11-23 21:16:27.545 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:16:27 compute-1 nova_compute[230183]: 2025-11-23 21:16:27.562 230187 INFO nova.scheduler.client.report [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Deleted allocations for instance 4384cda9-2a35-4df4-84b1-a045a41852ac
Nov 23 21:16:27 compute-1 nova_compute[230183]: 2025-11-23 21:16:27.653 230187 DEBUG oslo_concurrency.lockutils [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "4384cda9-2a35-4df4-84b1-a045a41852ac" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:16:27 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/318558322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:16:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.003000078s ======
Nov 23 21:16:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:27.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000078s
Nov 23 21:16:28 compute-1 ceph-mon[80135]: pgmap v1052: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 20 KiB/s wr, 2 op/s
Nov 23 21:16:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:16:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:28.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:16:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:16:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:29.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:16:30 compute-1 nova_compute[230183]: 2025-11-23 21:16:30.244 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:16:30 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:16:30.623 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d8ff4ac4-2bee-48db-b79e-2466bc4db046, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:16:30 compute-1 ceph-mon[80135]: pgmap v1053: 337 pgs: 337 active+clean; 135 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 23 KiB/s wr, 30 op/s
Nov 23 21:16:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:16:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:30.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:16:31 compute-1 podman[243648]: 2025-11-23 21:16:31.650295887 +0000 UTC m=+0.052020190 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 21:16:31 compute-1 podman[243647]: 2025-11-23 21:16:31.669713885 +0000 UTC m=+0.083032268 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 23 21:16:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:16:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:31.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:16:32 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:16:32 compute-1 nova_compute[230183]: 2025-11-23 21:16:32.588 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:16:32 compute-1 ceph-mon[80135]: pgmap v1054: 337 pgs: 337 active+clean; 121 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 9.0 KiB/s wr, 34 op/s
Nov 23 21:16:32 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2926754021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:16:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:16:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:32.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:16:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:16:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:16:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:33.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:16:34 compute-1 ceph-mon[80135]: pgmap v1055: 337 pgs: 337 active+clean; 121 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 7.8 KiB/s wr, 30 op/s
Nov 23 21:16:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:16:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:34.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:16:35 compute-1 nova_compute[230183]: 2025-11-23 21:16:35.247 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:16:35 compute-1 podman[243695]: 2025-11-23 21:16:35.648557687 +0000 UTC m=+0.061847802 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd)
Nov 23 21:16:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:16:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:35.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:16:36 compute-1 nova_compute[230183]: 2025-11-23 21:16:36.166 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:16:36 compute-1 nova_compute[230183]: 2025-11-23 21:16:36.198 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:16:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:16:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:36.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:16:36 compute-1 ceph-mon[80135]: pgmap v1056: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 9.0 KiB/s wr, 57 op/s
Nov 23 21:16:37 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:16:37 compute-1 nova_compute[230183]: 2025-11-23 21:16:37.635 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:16:37 compute-1 ceph-mon[80135]: pgmap v1057: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 4.3 KiB/s wr, 56 op/s
Nov 23 21:16:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:16:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:37.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:16:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:16:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:38.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:16:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:16:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:39.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:16:40 compute-1 ceph-mon[80135]: pgmap v1058: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 4.3 KiB/s wr, 56 op/s
Nov 23 21:16:40 compute-1 nova_compute[230183]: 2025-11-23 21:16:40.224 230187 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763932585.222552, 4384cda9-2a35-4df4-84b1-a045a41852ac => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 23 21:16:40 compute-1 nova_compute[230183]: 2025-11-23 21:16:40.224 230187 INFO nova.compute.manager [-] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] VM Stopped (Lifecycle Event)
Nov 23 21:16:40 compute-1 nova_compute[230183]: 2025-11-23 21:16:40.250 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:16:40 compute-1 nova_compute[230183]: 2025-11-23 21:16:40.268 230187 DEBUG nova.compute.manager [None req-03a0665d-db7e-4407-948d-c3a7c632607f - - - - - -] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:16:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:16:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:40.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:16:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:16:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:41.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:16:42 compute-1 ceph-mon[80135]: pgmap v1059: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.5 KiB/s wr, 32 op/s
Nov 23 21:16:42 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:16:42 compute-1 nova_compute[230183]: 2025-11-23 21:16:42.686 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:16:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:16:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:42.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:16:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:16:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:43.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:16:44 compute-1 ceph-mon[80135]: pgmap v1060: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 23 21:16:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:16:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:44.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:16:45 compute-1 nova_compute[230183]: 2025-11-23 21:16:45.253 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:16:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:16:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:45.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:16:46 compute-1 ceph-mon[80135]: pgmap v1061: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Nov 23 21:16:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:16:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:46.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:16:46 compute-1 sudo[243722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:16:46 compute-1 sudo[243722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:16:46 compute-1 sudo[243722]: pam_unix(sudo:session): session closed for user root
Nov 23 21:16:47 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:16:47 compute-1 nova_compute[230183]: 2025-11-23 21:16:47.689 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:16:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:16:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:47.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:16:48 compute-1 ceph-mon[80135]: pgmap v1062: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 21:16:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:16:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:16:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:48.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:16:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:16:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:49.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:16:50 compute-1 nova_compute[230183]: 2025-11-23 21:16:50.256 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:16:50 compute-1 ceph-mon[80135]: pgmap v1063: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 21:16:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:16:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:50.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:16:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:16:51.074 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:16:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:16:51.075 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:16:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:16:51.075 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:16:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:16:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:51.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:16:52 compute-1 ceph-mon[80135]: pgmap v1064: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Nov 23 21:16:52 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:16:52 compute-1 nova_compute[230183]: 2025-11-23 21:16:52.692 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:16:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:16:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:52.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:16:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:16:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:53.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:16:54 compute-1 ceph-mon[80135]: pgmap v1065: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:16:54 compute-1 nova_compute[230183]: 2025-11-23 21:16:54.468 230187 DEBUG oslo_concurrency.lockutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:16:54 compute-1 nova_compute[230183]: 2025-11-23 21:16:54.468 230187 DEBUG oslo_concurrency.lockutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:16:54 compute-1 nova_compute[230183]: 2025-11-23 21:16:54.486 230187 DEBUG nova.compute.manager [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 23 21:16:54 compute-1 nova_compute[230183]: 2025-11-23 21:16:54.554 230187 DEBUG oslo_concurrency.lockutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:16:54 compute-1 nova_compute[230183]: 2025-11-23 21:16:54.555 230187 DEBUG oslo_concurrency.lockutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:16:54 compute-1 nova_compute[230183]: 2025-11-23 21:16:54.562 230187 DEBUG nova.virt.hardware [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 23 21:16:54 compute-1 nova_compute[230183]: 2025-11-23 21:16:54.562 230187 INFO nova.compute.claims [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Claim successful on node compute-1.ctlplane.example.com
Nov 23 21:16:54 compute-1 nova_compute[230183]: 2025-11-23 21:16:54.669 230187 DEBUG oslo_concurrency.processutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:16:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:16:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:54.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:16:55 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:16:55 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2342618095' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:16:55 compute-1 nova_compute[230183]: 2025-11-23 21:16:55.130 230187 DEBUG oslo_concurrency.processutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:16:55 compute-1 nova_compute[230183]: 2025-11-23 21:16:55.140 230187 DEBUG nova.compute.provider_tree [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:16:55 compute-1 nova_compute[230183]: 2025-11-23 21:16:55.155 230187 DEBUG nova.scheduler.client.report [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:16:55 compute-1 nova_compute[230183]: 2025-11-23 21:16:55.176 230187 DEBUG oslo_concurrency.lockutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:16:55 compute-1 nova_compute[230183]: 2025-11-23 21:16:55.177 230187 DEBUG nova.compute.manager [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 23 21:16:55 compute-1 nova_compute[230183]: 2025-11-23 21:16:55.226 230187 DEBUG nova.compute.manager [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 23 21:16:55 compute-1 nova_compute[230183]: 2025-11-23 21:16:55.227 230187 DEBUG nova.network.neutron [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 23 21:16:55 compute-1 nova_compute[230183]: 2025-11-23 21:16:55.259 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:16:55 compute-1 nova_compute[230183]: 2025-11-23 21:16:55.263 230187 INFO nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 23 21:16:55 compute-1 nova_compute[230183]: 2025-11-23 21:16:55.282 230187 DEBUG nova.compute.manager [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 23 21:16:55 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2342618095' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:16:55 compute-1 nova_compute[230183]: 2025-11-23 21:16:55.366 230187 DEBUG nova.compute.manager [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 23 21:16:55 compute-1 nova_compute[230183]: 2025-11-23 21:16:55.367 230187 DEBUG nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 23 21:16:55 compute-1 nova_compute[230183]: 2025-11-23 21:16:55.367 230187 INFO nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Creating image(s)
Nov 23 21:16:55 compute-1 nova_compute[230183]: 2025-11-23 21:16:55.388 230187 DEBUG nova.storage.rbd_utils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image f638f2b4-bdf0-46c2-81d0-143511a01fb5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:16:55 compute-1 nova_compute[230183]: 2025-11-23 21:16:55.413 230187 DEBUG nova.storage.rbd_utils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image f638f2b4-bdf0-46c2-81d0-143511a01fb5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:16:55 compute-1 nova_compute[230183]: 2025-11-23 21:16:55.435 230187 DEBUG nova.storage.rbd_utils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image f638f2b4-bdf0-46c2-81d0-143511a01fb5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:16:55 compute-1 nova_compute[230183]: 2025-11-23 21:16:55.438 230187 DEBUG oslo_concurrency.processutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:16:55 compute-1 nova_compute[230183]: 2025-11-23 21:16:55.459 230187 DEBUG nova.policy [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9fb5352c62684f2ba3a326a953a10dfe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '782593db60784ab8bff41fe87d72ff5f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 23 21:16:55 compute-1 nova_compute[230183]: 2025-11-23 21:16:55.513 230187 DEBUG oslo_concurrency.processutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:16:55 compute-1 nova_compute[230183]: 2025-11-23 21:16:55.514 230187 DEBUG oslo_concurrency.lockutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:16:55 compute-1 nova_compute[230183]: 2025-11-23 21:16:55.514 230187 DEBUG oslo_concurrency.lockutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:16:55 compute-1 nova_compute[230183]: 2025-11-23 21:16:55.515 230187 DEBUG oslo_concurrency.lockutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:16:55 compute-1 nova_compute[230183]: 2025-11-23 21:16:55.538 230187 DEBUG nova.storage.rbd_utils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image f638f2b4-bdf0-46c2-81d0-143511a01fb5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:16:55 compute-1 nova_compute[230183]: 2025-11-23 21:16:55.541 230187 DEBUG oslo_concurrency.processutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 f638f2b4-bdf0-46c2-81d0-143511a01fb5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:16:55 compute-1 nova_compute[230183]: 2025-11-23 21:16:55.830 230187 DEBUG oslo_concurrency.processutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 f638f2b4-bdf0-46c2-81d0-143511a01fb5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:16:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:16:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:55.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:16:55 compute-1 nova_compute[230183]: 2025-11-23 21:16:55.899 230187 DEBUG nova.storage.rbd_utils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] resizing rbd image f638f2b4-bdf0-46c2-81d0-143511a01fb5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 23 21:16:56 compute-1 nova_compute[230183]: 2025-11-23 21:16:56.037 230187 DEBUG nova.objects.instance [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'migration_context' on Instance uuid f638f2b4-bdf0-46c2-81d0-143511a01fb5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:16:56 compute-1 nova_compute[230183]: 2025-11-23 21:16:56.055 230187 DEBUG nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 23 21:16:56 compute-1 nova_compute[230183]: 2025-11-23 21:16:56.055 230187 DEBUG nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Ensure instance console log exists: /var/lib/nova/instances/f638f2b4-bdf0-46c2-81d0-143511a01fb5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 23 21:16:56 compute-1 nova_compute[230183]: 2025-11-23 21:16:56.056 230187 DEBUG oslo_concurrency.lockutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:16:56 compute-1 nova_compute[230183]: 2025-11-23 21:16:56.056 230187 DEBUG oslo_concurrency.lockutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:16:56 compute-1 nova_compute[230183]: 2025-11-23 21:16:56.057 230187 DEBUG oslo_concurrency.lockutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:16:56 compute-1 nova_compute[230183]: 2025-11-23 21:16:56.303 230187 DEBUG nova.network.neutron [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Successfully created port: 984010df-e5b5-45c2-9db5-f0046f5efd50 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 23 21:16:56 compute-1 ceph-mon[80135]: pgmap v1066: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:16:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:16:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:56.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:16:57 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:16:57 compute-1 nova_compute[230183]: 2025-11-23 21:16:57.694 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:16:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:16:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:57.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:16:58 compute-1 ceph-mon[80135]: pgmap v1067: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:16:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:16:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:58.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:16:59 compute-1 nova_compute[230183]: 2025-11-23 21:16:59.832 230187 DEBUG nova.network.neutron [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Successfully updated port: 984010df-e5b5-45c2-9db5-f0046f5efd50 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 23 21:16:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:16:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:16:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:59.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:16:59 compute-1 nova_compute[230183]: 2025-11-23 21:16:59.849 230187 DEBUG oslo_concurrency.lockutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "refresh_cache-f638f2b4-bdf0-46c2-81d0-143511a01fb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:16:59 compute-1 nova_compute[230183]: 2025-11-23 21:16:59.850 230187 DEBUG oslo_concurrency.lockutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquired lock "refresh_cache-f638f2b4-bdf0-46c2-81d0-143511a01fb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:16:59 compute-1 nova_compute[230183]: 2025-11-23 21:16:59.850 230187 DEBUG nova.network.neutron [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 23 21:16:59 compute-1 nova_compute[230183]: 2025-11-23 21:16:59.961 230187 DEBUG nova.compute.manager [req-c0394ebe-1cec-485c-a6f2-0060a90b62ed req-bea66f36-f6d9-476a-ad06-c9549ba34201 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Received event network-changed-984010df-e5b5-45c2-9db5-f0046f5efd50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:16:59 compute-1 nova_compute[230183]: 2025-11-23 21:16:59.962 230187 DEBUG nova.compute.manager [req-c0394ebe-1cec-485c-a6f2-0060a90b62ed req-bea66f36-f6d9-476a-ad06-c9549ba34201 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Refreshing instance network info cache due to event network-changed-984010df-e5b5-45c2-9db5-f0046f5efd50. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 23 21:16:59 compute-1 nova_compute[230183]: 2025-11-23 21:16:59.962 230187 DEBUG oslo_concurrency.lockutils [req-c0394ebe-1cec-485c-a6f2-0060a90b62ed req-bea66f36-f6d9-476a-ad06-c9549ba34201 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-f638f2b4-bdf0-46c2-81d0-143511a01fb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:17:00 compute-1 nova_compute[230183]: 2025-11-23 21:17:00.025 230187 DEBUG nova.network.neutron [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 23 21:17:00 compute-1 nova_compute[230183]: 2025-11-23 21:17:00.261 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:00 compute-1 ceph-mon[80135]: pgmap v1068: 337 pgs: 337 active+clean; 73 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s rd, 1.6 MiB/s wr, 15 op/s
Nov 23 21:17:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:17:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:00.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:17:01 compute-1 nova_compute[230183]: 2025-11-23 21:17:01.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:17:01 compute-1 nova_compute[230183]: 2025-11-23 21:17:01.513 230187 DEBUG nova.network.neutron [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Updating instance_info_cache with network_info: [{"id": "984010df-e5b5-45c2-9db5-f0046f5efd50", "address": "fa:16:3e:63:db:14", "network": {"id": "45f4166e-7bc0-4981-9683-ade606fa5710", "bridge": "br-int", "label": "tempest-network-smoke--1927222341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap984010df-e5", "ovs_interfaceid": "984010df-e5b5-45c2-9db5-f0046f5efd50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:17:01 compute-1 nova_compute[230183]: 2025-11-23 21:17:01.533 230187 DEBUG oslo_concurrency.lockutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Releasing lock "refresh_cache-f638f2b4-bdf0-46c2-81d0-143511a01fb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:17:01 compute-1 nova_compute[230183]: 2025-11-23 21:17:01.533 230187 DEBUG nova.compute.manager [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Instance network_info: |[{"id": "984010df-e5b5-45c2-9db5-f0046f5efd50", "address": "fa:16:3e:63:db:14", "network": {"id": "45f4166e-7bc0-4981-9683-ade606fa5710", "bridge": "br-int", "label": "tempest-network-smoke--1927222341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap984010df-e5", "ovs_interfaceid": "984010df-e5b5-45c2-9db5-f0046f5efd50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 23 21:17:01 compute-1 nova_compute[230183]: 2025-11-23 21:17:01.534 230187 DEBUG oslo_concurrency.lockutils [req-c0394ebe-1cec-485c-a6f2-0060a90b62ed req-bea66f36-f6d9-476a-ad06-c9549ba34201 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-f638f2b4-bdf0-46c2-81d0-143511a01fb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:17:01 compute-1 nova_compute[230183]: 2025-11-23 21:17:01.535 230187 DEBUG nova.network.neutron [req-c0394ebe-1cec-485c-a6f2-0060a90b62ed req-bea66f36-f6d9-476a-ad06-c9549ba34201 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Refreshing network info cache for port 984010df-e5b5-45c2-9db5-f0046f5efd50 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 23 21:17:01 compute-1 nova_compute[230183]: 2025-11-23 21:17:01.540 230187 DEBUG nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Start _get_guest_xml network_info=[{"id": "984010df-e5b5-45c2-9db5-f0046f5efd50", "address": "fa:16:3e:63:db:14", "network": {"id": "45f4166e-7bc0-4981-9683-ade606fa5710", "bridge": "br-int", "label": "tempest-network-smoke--1927222341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap984010df-e5", "ovs_interfaceid": "984010df-e5b5-45c2-9db5-f0046f5efd50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T21:05:43Z,direct_url=<?>,disk_format='qcow2',id=3c45fa6c-8a99-4359-a34e-d89f4e1e77d0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3f8fb5175f85402ba20cf9c6989d47cf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-23T21:05:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_options': None, 'encryption_format': None, 'guest_format': None, 'encrypted': False, 'size': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'image_id': '3c45fa6c-8a99-4359-a34e-d89f4e1e77d0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 23 21:17:01 compute-1 nova_compute[230183]: 2025-11-23 21:17:01.547 230187 WARNING nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 21:17:01 compute-1 nova_compute[230183]: 2025-11-23 21:17:01.554 230187 DEBUG nova.virt.libvirt.host [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 23 21:17:01 compute-1 nova_compute[230183]: 2025-11-23 21:17:01.555 230187 DEBUG nova.virt.libvirt.host [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 23 21:17:01 compute-1 nova_compute[230183]: 2025-11-23 21:17:01.558 230187 DEBUG nova.virt.libvirt.host [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 23 21:17:01 compute-1 nova_compute[230183]: 2025-11-23 21:17:01.559 230187 DEBUG nova.virt.libvirt.host [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 23 21:17:01 compute-1 nova_compute[230183]: 2025-11-23 21:17:01.559 230187 DEBUG nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 23 21:17:01 compute-1 nova_compute[230183]: 2025-11-23 21:17:01.559 230187 DEBUG nova.virt.hardware [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-23T21:05:42Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='56044b93-2979-48aa-b67f-c37e1b489306',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T21:05:43Z,direct_url=<?>,disk_format='qcow2',id=3c45fa6c-8a99-4359-a34e-d89f4e1e77d0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3f8fb5175f85402ba20cf9c6989d47cf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-23T21:05:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 23 21:17:01 compute-1 nova_compute[230183]: 2025-11-23 21:17:01.560 230187 DEBUG nova.virt.hardware [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 23 21:17:01 compute-1 nova_compute[230183]: 2025-11-23 21:17:01.560 230187 DEBUG nova.virt.hardware [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 23 21:17:01 compute-1 nova_compute[230183]: 2025-11-23 21:17:01.560 230187 DEBUG nova.virt.hardware [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 23 21:17:01 compute-1 nova_compute[230183]: 2025-11-23 21:17:01.560 230187 DEBUG nova.virt.hardware [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 23 21:17:01 compute-1 nova_compute[230183]: 2025-11-23 21:17:01.560 230187 DEBUG nova.virt.hardware [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 23 21:17:01 compute-1 nova_compute[230183]: 2025-11-23 21:17:01.561 230187 DEBUG nova.virt.hardware [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 23 21:17:01 compute-1 nova_compute[230183]: 2025-11-23 21:17:01.561 230187 DEBUG nova.virt.hardware [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 23 21:17:01 compute-1 nova_compute[230183]: 2025-11-23 21:17:01.561 230187 DEBUG nova.virt.hardware [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 23 21:17:01 compute-1 nova_compute[230183]: 2025-11-23 21:17:01.561 230187 DEBUG nova.virt.hardware [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 23 21:17:01 compute-1 nova_compute[230183]: 2025-11-23 21:17:01.561 230187 DEBUG nova.virt.hardware [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 23 21:17:01 compute-1 nova_compute[230183]: 2025-11-23 21:17:01.564 230187 DEBUG oslo_concurrency.processutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:17:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:17:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:01.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:17:02 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 21:17:02 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2602542080' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:17:02 compute-1 nova_compute[230183]: 2025-11-23 21:17:02.019 230187 DEBUG oslo_concurrency.processutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:17:02 compute-1 nova_compute[230183]: 2025-11-23 21:17:02.056 230187 DEBUG nova.storage.rbd_utils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image f638f2b4-bdf0-46c2-81d0-143511a01fb5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:17:02 compute-1 nova_compute[230183]: 2025-11-23 21:17:02.060 230187 DEBUG oslo_concurrency.processutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:17:02 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:17:02 compute-1 ceph-mon[80135]: pgmap v1069: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 23 21:17:02 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2602542080' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:17:02 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 21:17:02 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/29360451' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:17:02 compute-1 nova_compute[230183]: 2025-11-23 21:17:02.495 230187 DEBUG oslo_concurrency.processutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:17:02 compute-1 nova_compute[230183]: 2025-11-23 21:17:02.497 230187 DEBUG nova.virt.libvirt.vif [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-23T21:16:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-793817431',display_name='tempest-TestNetworkBasicOps-server-793817431',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-793817431',id=13,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFyCHalitTYHY+i3k7NGtIz/axejAHzuAlVnR4e5KMHIjAE7Fj+3ovJsaUKuZw9NPKsJ0qVqgikm8FkvL2Pu0+xYGcJBA97J85NKDWDS+eoNhScnnixkt+4uoxHyqB5n7A==',key_name='tempest-TestNetworkBasicOps-1599562746',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-gf1xk21n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T21:16:55Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=f638f2b4-bdf0-46c2-81d0-143511a01fb5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "984010df-e5b5-45c2-9db5-f0046f5efd50", "address": "fa:16:3e:63:db:14", "network": {"id": "45f4166e-7bc0-4981-9683-ade606fa5710", "bridge": "br-int", "label": "tempest-network-smoke--1927222341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap984010df-e5", "ovs_interfaceid": "984010df-e5b5-45c2-9db5-f0046f5efd50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 23 21:17:02 compute-1 nova_compute[230183]: 2025-11-23 21:17:02.498 230187 DEBUG nova.network.os_vif_util [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "984010df-e5b5-45c2-9db5-f0046f5efd50", "address": "fa:16:3e:63:db:14", "network": {"id": "45f4166e-7bc0-4981-9683-ade606fa5710", "bridge": "br-int", "label": "tempest-network-smoke--1927222341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap984010df-e5", "ovs_interfaceid": "984010df-e5b5-45c2-9db5-f0046f5efd50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 23 21:17:02 compute-1 nova_compute[230183]: 2025-11-23 21:17:02.499 230187 DEBUG nova.network.os_vif_util [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:db:14,bridge_name='br-int',has_traffic_filtering=True,id=984010df-e5b5-45c2-9db5-f0046f5efd50,network=Network(45f4166e-7bc0-4981-9683-ade606fa5710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap984010df-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 23 21:17:02 compute-1 nova_compute[230183]: 2025-11-23 21:17:02.500 230187 DEBUG nova.objects.instance [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'pci_devices' on Instance uuid f638f2b4-bdf0-46c2-81d0-143511a01fb5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:17:02 compute-1 nova_compute[230183]: 2025-11-23 21:17:02.523 230187 DEBUG nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] End _get_guest_xml xml=<domain type="kvm">
Nov 23 21:17:02 compute-1 nova_compute[230183]:   <uuid>f638f2b4-bdf0-46c2-81d0-143511a01fb5</uuid>
Nov 23 21:17:02 compute-1 nova_compute[230183]:   <name>instance-0000000d</name>
Nov 23 21:17:02 compute-1 nova_compute[230183]:   <memory>131072</memory>
Nov 23 21:17:02 compute-1 nova_compute[230183]:   <vcpu>1</vcpu>
Nov 23 21:17:02 compute-1 nova_compute[230183]:   <metadata>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 21:17:02 compute-1 nova_compute[230183]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:       <nova:name>tempest-TestNetworkBasicOps-server-793817431</nova:name>
Nov 23 21:17:02 compute-1 nova_compute[230183]:       <nova:creationTime>2025-11-23 21:17:01</nova:creationTime>
Nov 23 21:17:02 compute-1 nova_compute[230183]:       <nova:flavor name="m1.nano">
Nov 23 21:17:02 compute-1 nova_compute[230183]:         <nova:memory>128</nova:memory>
Nov 23 21:17:02 compute-1 nova_compute[230183]:         <nova:disk>1</nova:disk>
Nov 23 21:17:02 compute-1 nova_compute[230183]:         <nova:swap>0</nova:swap>
Nov 23 21:17:02 compute-1 nova_compute[230183]:         <nova:ephemeral>0</nova:ephemeral>
Nov 23 21:17:02 compute-1 nova_compute[230183]:         <nova:vcpus>1</nova:vcpus>
Nov 23 21:17:02 compute-1 nova_compute[230183]:       </nova:flavor>
Nov 23 21:17:02 compute-1 nova_compute[230183]:       <nova:owner>
Nov 23 21:17:02 compute-1 nova_compute[230183]:         <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 21:17:02 compute-1 nova_compute[230183]:         <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 21:17:02 compute-1 nova_compute[230183]:       </nova:owner>
Nov 23 21:17:02 compute-1 nova_compute[230183]:       <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:       <nova:ports>
Nov 23 21:17:02 compute-1 nova_compute[230183]:         <nova:port uuid="984010df-e5b5-45c2-9db5-f0046f5efd50">
Nov 23 21:17:02 compute-1 nova_compute[230183]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:         </nova:port>
Nov 23 21:17:02 compute-1 nova_compute[230183]:       </nova:ports>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     </nova:instance>
Nov 23 21:17:02 compute-1 nova_compute[230183]:   </metadata>
Nov 23 21:17:02 compute-1 nova_compute[230183]:   <sysinfo type="smbios">
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <system>
Nov 23 21:17:02 compute-1 nova_compute[230183]:       <entry name="manufacturer">RDO</entry>
Nov 23 21:17:02 compute-1 nova_compute[230183]:       <entry name="product">OpenStack Compute</entry>
Nov 23 21:17:02 compute-1 nova_compute[230183]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 23 21:17:02 compute-1 nova_compute[230183]:       <entry name="serial">f638f2b4-bdf0-46c2-81d0-143511a01fb5</entry>
Nov 23 21:17:02 compute-1 nova_compute[230183]:       <entry name="uuid">f638f2b4-bdf0-46c2-81d0-143511a01fb5</entry>
Nov 23 21:17:02 compute-1 nova_compute[230183]:       <entry name="family">Virtual Machine</entry>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     </system>
Nov 23 21:17:02 compute-1 nova_compute[230183]:   </sysinfo>
Nov 23 21:17:02 compute-1 nova_compute[230183]:   <os>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <boot dev="hd"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <smbios mode="sysinfo"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:   </os>
Nov 23 21:17:02 compute-1 nova_compute[230183]:   <features>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <acpi/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <apic/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <vmcoreinfo/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:   </features>
Nov 23 21:17:02 compute-1 nova_compute[230183]:   <clock offset="utc">
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <timer name="pit" tickpolicy="delay"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <timer name="hpet" present="no"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:   </clock>
Nov 23 21:17:02 compute-1 nova_compute[230183]:   <cpu mode="host-model" match="exact">
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <topology sockets="1" cores="1" threads="1"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:   </cpu>
Nov 23 21:17:02 compute-1 nova_compute[230183]:   <devices>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <disk type="network" device="disk">
Nov 23 21:17:02 compute-1 nova_compute[230183]:       <driver type="raw" cache="none"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:       <source protocol="rbd" name="vms/f638f2b4-bdf0-46c2-81d0-143511a01fb5_disk">
Nov 23 21:17:02 compute-1 nova_compute[230183]:         <host name="192.168.122.100" port="6789"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:         <host name="192.168.122.102" port="6789"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:         <host name="192.168.122.101" port="6789"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:       </source>
Nov 23 21:17:02 compute-1 nova_compute[230183]:       <auth username="openstack">
Nov 23 21:17:02 compute-1 nova_compute[230183]:         <secret type="ceph" uuid="03808be8-ae4a-5548-82e6-4a294f1bc627"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:       </auth>
Nov 23 21:17:02 compute-1 nova_compute[230183]:       <target dev="vda" bus="virtio"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     </disk>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <disk type="network" device="cdrom">
Nov 23 21:17:02 compute-1 nova_compute[230183]:       <driver type="raw" cache="none"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:       <source protocol="rbd" name="vms/f638f2b4-bdf0-46c2-81d0-143511a01fb5_disk.config">
Nov 23 21:17:02 compute-1 nova_compute[230183]:         <host name="192.168.122.100" port="6789"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:         <host name="192.168.122.102" port="6789"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:         <host name="192.168.122.101" port="6789"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:       </source>
Nov 23 21:17:02 compute-1 nova_compute[230183]:       <auth username="openstack">
Nov 23 21:17:02 compute-1 nova_compute[230183]:         <secret type="ceph" uuid="03808be8-ae4a-5548-82e6-4a294f1bc627"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:       </auth>
Nov 23 21:17:02 compute-1 nova_compute[230183]:       <target dev="sda" bus="sata"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     </disk>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <interface type="ethernet">
Nov 23 21:17:02 compute-1 nova_compute[230183]:       <mac address="fa:16:3e:63:db:14"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:       <model type="virtio"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:       <driver name="vhost" rx_queue_size="512"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:       <mtu size="1442"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:       <target dev="tap984010df-e5"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     </interface>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <serial type="pty">
Nov 23 21:17:02 compute-1 nova_compute[230183]:       <log file="/var/lib/nova/instances/f638f2b4-bdf0-46c2-81d0-143511a01fb5/console.log" append="off"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     </serial>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <video>
Nov 23 21:17:02 compute-1 nova_compute[230183]:       <model type="virtio"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     </video>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <input type="tablet" bus="usb"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <rng model="virtio">
Nov 23 21:17:02 compute-1 nova_compute[230183]:       <backend model="random">/dev/urandom</backend>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     </rng>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <controller type="pci" model="pcie-root-port"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <controller type="usb" index="0"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     <memballoon model="virtio">
Nov 23 21:17:02 compute-1 nova_compute[230183]:       <stats period="10"/>
Nov 23 21:17:02 compute-1 nova_compute[230183]:     </memballoon>
Nov 23 21:17:02 compute-1 nova_compute[230183]:   </devices>
Nov 23 21:17:02 compute-1 nova_compute[230183]: </domain>
Nov 23 21:17:02 compute-1 nova_compute[230183]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 23 21:17:02 compute-1 nova_compute[230183]: 2025-11-23 21:17:02.525 230187 DEBUG nova.compute.manager [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Preparing to wait for external event network-vif-plugged-984010df-e5b5-45c2-9db5-f0046f5efd50 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 23 21:17:02 compute-1 nova_compute[230183]: 2025-11-23 21:17:02.525 230187 DEBUG oslo_concurrency.lockutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:17:02 compute-1 nova_compute[230183]: 2025-11-23 21:17:02.525 230187 DEBUG oslo_concurrency.lockutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:17:02 compute-1 nova_compute[230183]: 2025-11-23 21:17:02.526 230187 DEBUG oslo_concurrency.lockutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:17:02 compute-1 nova_compute[230183]: 2025-11-23 21:17:02.526 230187 DEBUG nova.virt.libvirt.vif [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-23T21:16:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-793817431',display_name='tempest-TestNetworkBasicOps-server-793817431',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-793817431',id=13,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFyCHalitTYHY+i3k7NGtIz/axejAHzuAlVnR4e5KMHIjAE7Fj+3ovJsaUKuZw9NPKsJ0qVqgikm8FkvL2Pu0+xYGcJBA97J85NKDWDS+eoNhScnnixkt+4uoxHyqB5n7A==',key_name='tempest-TestNetworkBasicOps-1599562746',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-gf1xk21n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T21:16:55Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=f638f2b4-bdf0-46c2-81d0-143511a01fb5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "984010df-e5b5-45c2-9db5-f0046f5efd50", "address": "fa:16:3e:63:db:14", "network": {"id": "45f4166e-7bc0-4981-9683-ade606fa5710", "bridge": "br-int", "label": "tempest-network-smoke--1927222341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap984010df-e5", "ovs_interfaceid": "984010df-e5b5-45c2-9db5-f0046f5efd50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 23 21:17:02 compute-1 nova_compute[230183]: 2025-11-23 21:17:02.527 230187 DEBUG nova.network.os_vif_util [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "984010df-e5b5-45c2-9db5-f0046f5efd50", "address": "fa:16:3e:63:db:14", "network": {"id": "45f4166e-7bc0-4981-9683-ade606fa5710", "bridge": "br-int", "label": "tempest-network-smoke--1927222341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap984010df-e5", "ovs_interfaceid": "984010df-e5b5-45c2-9db5-f0046f5efd50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 23 21:17:02 compute-1 nova_compute[230183]: 2025-11-23 21:17:02.527 230187 DEBUG nova.network.os_vif_util [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:db:14,bridge_name='br-int',has_traffic_filtering=True,id=984010df-e5b5-45c2-9db5-f0046f5efd50,network=Network(45f4166e-7bc0-4981-9683-ade606fa5710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap984010df-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 23 21:17:02 compute-1 nova_compute[230183]: 2025-11-23 21:17:02.528 230187 DEBUG os_vif [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:db:14,bridge_name='br-int',has_traffic_filtering=True,id=984010df-e5b5-45c2-9db5-f0046f5efd50,network=Network(45f4166e-7bc0-4981-9683-ade606fa5710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap984010df-e5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 23 21:17:02 compute-1 nova_compute[230183]: 2025-11-23 21:17:02.528 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:02 compute-1 nova_compute[230183]: 2025-11-23 21:17:02.529 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:17:02 compute-1 nova_compute[230183]: 2025-11-23 21:17:02.529 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 23 21:17:02 compute-1 nova_compute[230183]: 2025-11-23 21:17:02.531 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:02 compute-1 nova_compute[230183]: 2025-11-23 21:17:02.531 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap984010df-e5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:17:02 compute-1 nova_compute[230183]: 2025-11-23 21:17:02.532 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap984010df-e5, col_values=(('external_ids', {'iface-id': '984010df-e5b5-45c2-9db5-f0046f5efd50', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:63:db:14', 'vm-uuid': 'f638f2b4-bdf0-46c2-81d0-143511a01fb5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:17:02 compute-1 NetworkManager[49021]: <info>  [1763932622.5339] manager: (tap984010df-e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/77)
Nov 23 21:17:02 compute-1 nova_compute[230183]: 2025-11-23 21:17:02.533 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:02 compute-1 nova_compute[230183]: 2025-11-23 21:17:02.536 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:17:02 compute-1 nova_compute[230183]: 2025-11-23 21:17:02.539 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:02 compute-1 nova_compute[230183]: 2025-11-23 21:17:02.539 230187 INFO os_vif [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:db:14,bridge_name='br-int',has_traffic_filtering=True,id=984010df-e5b5-45c2-9db5-f0046f5efd50,network=Network(45f4166e-7bc0-4981-9683-ade606fa5710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap984010df-e5')
Nov 23 21:17:02 compute-1 nova_compute[230183]: 2025-11-23 21:17:02.595 230187 DEBUG nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 23 21:17:02 compute-1 nova_compute[230183]: 2025-11-23 21:17:02.595 230187 DEBUG nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 23 21:17:02 compute-1 nova_compute[230183]: 2025-11-23 21:17:02.596 230187 DEBUG nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No VIF found with MAC fa:16:3e:63:db:14, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 23 21:17:02 compute-1 nova_compute[230183]: 2025-11-23 21:17:02.596 230187 INFO nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Using config drive
Nov 23 21:17:02 compute-1 nova_compute[230183]: 2025-11-23 21:17:02.628 230187 DEBUG nova.storage.rbd_utils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image f638f2b4-bdf0-46c2-81d0-143511a01fb5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:17:02 compute-1 podman[244010]: 2025-11-23 21:17:02.657111179 +0000 UTC m=+0.066743032 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 21:17:02 compute-1 podman[244009]: 2025-11-23 21:17:02.661912808 +0000 UTC m=+0.085341609 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 23 21:17:02 compute-1 nova_compute[230183]: 2025-11-23 21:17:02.697 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:17:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:02.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:17:02 compute-1 nova_compute[230183]: 2025-11-23 21:17:02.987 230187 INFO nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Creating config drive at /var/lib/nova/instances/f638f2b4-bdf0-46c2-81d0-143511a01fb5/disk.config
Nov 23 21:17:02 compute-1 nova_compute[230183]: 2025-11-23 21:17:02.998 230187 DEBUG oslo_concurrency.processutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f638f2b4-bdf0-46c2-81d0-143511a01fb5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpom06q4f4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:17:03 compute-1 nova_compute[230183]: 2025-11-23 21:17:03.127 230187 DEBUG oslo_concurrency.processutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f638f2b4-bdf0-46c2-81d0-143511a01fb5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpom06q4f4" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:17:03 compute-1 nova_compute[230183]: 2025-11-23 21:17:03.170 230187 DEBUG nova.storage.rbd_utils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image f638f2b4-bdf0-46c2-81d0-143511a01fb5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 21:17:03 compute-1 nova_compute[230183]: 2025-11-23 21:17:03.174 230187 DEBUG oslo_concurrency.processutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f638f2b4-bdf0-46c2-81d0-143511a01fb5/disk.config f638f2b4-bdf0-46c2-81d0-143511a01fb5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:17:03 compute-1 nova_compute[230183]: 2025-11-23 21:17:03.327 230187 DEBUG oslo_concurrency.processutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f638f2b4-bdf0-46c2-81d0-143511a01fb5/disk.config f638f2b4-bdf0-46c2-81d0-143511a01fb5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:17:03 compute-1 nova_compute[230183]: 2025-11-23 21:17:03.328 230187 INFO nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Deleting local config drive /var/lib/nova/instances/f638f2b4-bdf0-46c2-81d0-143511a01fb5/disk.config because it was imported into RBD.
Nov 23 21:17:03 compute-1 kernel: tap984010df-e5: entered promiscuous mode
Nov 23 21:17:03 compute-1 NetworkManager[49021]: <info>  [1763932623.3794] manager: (tap984010df-e5): new Tun device (/org/freedesktop/NetworkManager/Devices/78)
Nov 23 21:17:03 compute-1 nova_compute[230183]: 2025-11-23 21:17:03.379 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:03 compute-1 ovn_controller[132845]: 2025-11-23T21:17:03Z|00123|binding|INFO|Claiming lport 984010df-e5b5-45c2-9db5-f0046f5efd50 for this chassis.
Nov 23 21:17:03 compute-1 ovn_controller[132845]: 2025-11-23T21:17:03Z|00124|binding|INFO|984010df-e5b5-45c2-9db5-f0046f5efd50: Claiming fa:16:3e:63:db:14 10.100.0.10
Nov 23 21:17:03 compute-1 nova_compute[230183]: 2025-11-23 21:17:03.382 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:03 compute-1 nova_compute[230183]: 2025-11-23 21:17:03.384 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:03 compute-1 nova_compute[230183]: 2025-11-23 21:17:03.389 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.398 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:db:14 10.100.0.10'], port_security=['fa:16:3e:63:db:14 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f638f2b4-bdf0-46c2-81d0-143511a01fb5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45f4166e-7bc0-4981-9683-ade606fa5710', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ba908e3d-1310-4719-83e3-3b0a3d387de5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=84c02252-eea5-46a3-9f52-20439e666f31, chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=984010df-e5b5-45c2-9db5-f0046f5efd50) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.399 142158 INFO neutron.agent.ovn.metadata.agent [-] Port 984010df-e5b5-45c2-9db5-f0046f5efd50 in datapath 45f4166e-7bc0-4981-9683-ade606fa5710 bound to our chassis
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.400 142158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 45f4166e-7bc0-4981-9683-ade606fa5710
Nov 23 21:17:03 compute-1 systemd-udevd[244120]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.411 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[5c6f4ccc-3081-4b1e-a028-d067bd036273]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.411 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap45f4166e-71 in ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 23 21:17:03 compute-1 systemd-machined[193469]: New machine qemu-8-instance-0000000d.
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.415 233901 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap45f4166e-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.415 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[5a9a6bf3-2ef2-4269-9a10-7557351a2f14]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.415 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[fc522c9f-cf8f-4d2f-ad3a-d8efdc25c3a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:17:03 compute-1 NetworkManager[49021]: <info>  [1763932623.4245] device (tap984010df-e5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 23 21:17:03 compute-1 NetworkManager[49021]: <info>  [1763932623.4252] device (tap984010df-e5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.428 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[d97cbbbd-c896-43c6-9e61-4c5365510f0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:17:03 compute-1 systemd[1]: Started Virtual Machine qemu-8-instance-0000000d.
Nov 23 21:17:03 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/29360451' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 21:17:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:17:03 compute-1 ovn_controller[132845]: 2025-11-23T21:17:03Z|00125|binding|INFO|Setting lport 984010df-e5b5-45c2-9db5-f0046f5efd50 ovn-installed in OVS
Nov 23 21:17:03 compute-1 ovn_controller[132845]: 2025-11-23T21:17:03Z|00126|binding|INFO|Setting lport 984010df-e5b5-45c2-9db5-f0046f5efd50 up in Southbound
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.452 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[e321efab-7377-4ad8-bb01-e7c3b42ebc2b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:17:03 compute-1 nova_compute[230183]: 2025-11-23 21:17:03.453 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.484 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[de409230-f16b-4b1e-b218-be81c04863e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.489 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[0eadbd97-31c7-4dc2-bb23-425d7533a2ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:17:03 compute-1 NetworkManager[49021]: <info>  [1763932623.4904] manager: (tap45f4166e-70): new Veth device (/org/freedesktop/NetworkManager/Devices/79)
Nov 23 21:17:03 compute-1 systemd-udevd[244123]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.517 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[9422e344-0cda-4423-8cde-4158d614e8b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.520 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[2e7fe107-1bec-4538-accf-5435c6197dba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:17:03 compute-1 NetworkManager[49021]: <info>  [1763932623.5424] device (tap45f4166e-70): carrier: link connected
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.547 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[3e617df6-2320-47d9-a85d-b8c60c13ed51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:17:03 compute-1 nova_compute[230183]: 2025-11-23 21:17:03.550 230187 DEBUG nova.network.neutron [req-c0394ebe-1cec-485c-a6f2-0060a90b62ed req-bea66f36-f6d9-476a-ad06-c9549ba34201 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Updated VIF entry in instance network info cache for port 984010df-e5b5-45c2-9db5-f0046f5efd50. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 23 21:17:03 compute-1 nova_compute[230183]: 2025-11-23 21:17:03.551 230187 DEBUG nova.network.neutron [req-c0394ebe-1cec-485c-a6f2-0060a90b62ed req-bea66f36-f6d9-476a-ad06-c9549ba34201 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Updating instance_info_cache with network_info: [{"id": "984010df-e5b5-45c2-9db5-f0046f5efd50", "address": "fa:16:3e:63:db:14", "network": {"id": "45f4166e-7bc0-4981-9683-ade606fa5710", "bridge": "br-int", "label": "tempest-network-smoke--1927222341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap984010df-e5", "ovs_interfaceid": "984010df-e5b5-45c2-9db5-f0046f5efd50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.563 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[548ea005-dfad-400d-954b-6f94a81a1f1f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap45f4166e-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6a:a8:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 457211, 'reachable_time': 34322, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244152, 'error': None, 'target': 'ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:17:03 compute-1 nova_compute[230183]: 2025-11-23 21:17:03.566 230187 DEBUG oslo_concurrency.lockutils [req-c0394ebe-1cec-485c-a6f2-0060a90b62ed req-bea66f36-f6d9-476a-ad06-c9549ba34201 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-f638f2b4-bdf0-46c2-81d0-143511a01fb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.577 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[4e8e205f-5d61-463a-958d-e1ea444cc3c0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6a:a874'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 457211, 'tstamp': 457211}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244153, 'error': None, 'target': 'ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.595 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[a3ce28e9-9bd1-4b1f-acd5-2f6c827eadbd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap45f4166e-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6a:a8:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 457211, 'reachable_time': 34322, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 244154, 'error': None, 'target': 'ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.621 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[187f315e-c52a-45b0-9bfa-94321ce5b526]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.668 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[9e8d9be4-3a95-4899-ac27-ee439073cd64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.669 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45f4166e-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.670 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.670 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap45f4166e-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:17:03 compute-1 nova_compute[230183]: 2025-11-23 21:17:03.671 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:03 compute-1 NetworkManager[49021]: <info>  [1763932623.6724] manager: (tap45f4166e-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/80)
Nov 23 21:17:03 compute-1 kernel: tap45f4166e-70: entered promiscuous mode
Nov 23 21:17:03 compute-1 nova_compute[230183]: 2025-11-23 21:17:03.674 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.675 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap45f4166e-70, col_values=(('external_ids', {'iface-id': '4d2b4219-31d6-45aa-9e4b-1dde83c9be1c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:17:03 compute-1 nova_compute[230183]: 2025-11-23 21:17:03.676 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:03 compute-1 ovn_controller[132845]: 2025-11-23T21:17:03Z|00127|binding|INFO|Releasing lport 4d2b4219-31d6-45aa-9e4b-1dde83c9be1c from this chassis (sb_readonly=0)
Nov 23 21:17:03 compute-1 nova_compute[230183]: 2025-11-23 21:17:03.701 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.702 142158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/45f4166e-7bc0-4981-9683-ade606fa5710.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/45f4166e-7bc0-4981-9683-ade606fa5710.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.703 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[6bfd96de-b5e9-497c-960d-48542e1dec0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.703 142158 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]: global
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]:     log         /dev/log local0 debug
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]:     log-tag     haproxy-metadata-proxy-45f4166e-7bc0-4981-9683-ade606fa5710
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]:     user        root
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]:     group       root
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]:     maxconn     1024
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]:     pidfile     /var/lib/neutron/external/pids/45f4166e-7bc0-4981-9683-ade606fa5710.pid.haproxy
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]:     daemon
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]: 
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]: defaults
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]:     log global
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]:     mode http
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]:     option httplog
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]:     option dontlognull
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]:     option http-server-close
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]:     option forwardfor
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]:     retries                 3
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]:     timeout http-request    30s
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]:     timeout connect         30s
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]:     timeout client          32s
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]:     timeout server          32s
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]:     timeout http-keep-alive 30s
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]: 
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]: 
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]: listen listener
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]:     bind 169.254.169.254:80
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]:     server metadata /var/lib/neutron/metadata_proxy
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]:     http-request add-header X-OVN-Network-ID 45f4166e-7bc0-4981-9683-ade606fa5710
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 23 21:17:03 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.704 142158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710', 'env', 'PROCESS_TAG=haproxy-45f4166e-7bc0-4981-9683-ade606fa5710', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/45f4166e-7bc0-4981-9683-ade606fa5710.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 23 21:17:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:17:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:03.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:17:03 compute-1 nova_compute[230183]: 2025-11-23 21:17:03.855 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932623.8549895, f638f2b4-bdf0-46c2-81d0-143511a01fb5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 23 21:17:03 compute-1 nova_compute[230183]: 2025-11-23 21:17:03.856 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] VM Started (Lifecycle Event)
Nov 23 21:17:03 compute-1 nova_compute[230183]: 2025-11-23 21:17:03.871 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:17:03 compute-1 nova_compute[230183]: 2025-11-23 21:17:03.874 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932623.8551562, f638f2b4-bdf0-46c2-81d0-143511a01fb5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 23 21:17:03 compute-1 nova_compute[230183]: 2025-11-23 21:17:03.875 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] VM Paused (Lifecycle Event)
Nov 23 21:17:03 compute-1 nova_compute[230183]: 2025-11-23 21:17:03.892 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:17:03 compute-1 nova_compute[230183]: 2025-11-23 21:17:03.895 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 23 21:17:03 compute-1 nova_compute[230183]: 2025-11-23 21:17:03.912 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 23 21:17:04 compute-1 podman[244229]: 2025-11-23 21:17:04.026968047 +0000 UTC m=+0.043616465 container create d1ab6f23c0d2d6ae14bb359c26fddd412e6a6447e012895ce26f36865930fd80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 23 21:17:04 compute-1 systemd[1]: Started libpod-conmon-d1ab6f23c0d2d6ae14bb359c26fddd412e6a6447e012895ce26f36865930fd80.scope.
Nov 23 21:17:04 compute-1 systemd[1]: Started libcrun container.
Nov 23 21:17:04 compute-1 nova_compute[230183]: 2025-11-23 21:17:04.081 230187 DEBUG nova.compute.manager [req-4df6a19b-52e0-4be5-9a35-3c404dd9f4b4 req-ec2592de-e81f-44d5-8776-779a52d8ae4c 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Received event network-vif-plugged-984010df-e5b5-45c2-9db5-f0046f5efd50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:17:04 compute-1 nova_compute[230183]: 2025-11-23 21:17:04.082 230187 DEBUG oslo_concurrency.lockutils [req-4df6a19b-52e0-4be5-9a35-3c404dd9f4b4 req-ec2592de-e81f-44d5-8776-779a52d8ae4c 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:17:04 compute-1 nova_compute[230183]: 2025-11-23 21:17:04.082 230187 DEBUG oslo_concurrency.lockutils [req-4df6a19b-52e0-4be5-9a35-3c404dd9f4b4 req-ec2592de-e81f-44d5-8776-779a52d8ae4c 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:17:04 compute-1 nova_compute[230183]: 2025-11-23 21:17:04.082 230187 DEBUG oslo_concurrency.lockutils [req-4df6a19b-52e0-4be5-9a35-3c404dd9f4b4 req-ec2592de-e81f-44d5-8776-779a52d8ae4c 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:17:04 compute-1 nova_compute[230183]: 2025-11-23 21:17:04.082 230187 DEBUG nova.compute.manager [req-4df6a19b-52e0-4be5-9a35-3c404dd9f4b4 req-ec2592de-e81f-44d5-8776-779a52d8ae4c 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Processing event network-vif-plugged-984010df-e5b5-45c2-9db5-f0046f5efd50 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 23 21:17:04 compute-1 nova_compute[230183]: 2025-11-23 21:17:04.083 230187 DEBUG nova.compute.manager [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 23 21:17:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c03b0f65c36e2efcc867601f87a616208b57ce73396437f2aca52a4ea44641ae/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 21:17:04 compute-1 nova_compute[230183]: 2025-11-23 21:17:04.086 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932624.0864298, f638f2b4-bdf0-46c2-81d0-143511a01fb5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 23 21:17:04 compute-1 nova_compute[230183]: 2025-11-23 21:17:04.087 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] VM Resumed (Lifecycle Event)
Nov 23 21:17:04 compute-1 nova_compute[230183]: 2025-11-23 21:17:04.088 230187 DEBUG nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 23 21:17:04 compute-1 nova_compute[230183]: 2025-11-23 21:17:04.091 230187 INFO nova.virt.libvirt.driver [-] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Instance spawned successfully.
Nov 23 21:17:04 compute-1 nova_compute[230183]: 2025-11-23 21:17:04.091 230187 DEBUG nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 23 21:17:04 compute-1 podman[244229]: 2025-11-23 21:17:04.003406148 +0000 UTC m=+0.020054586 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 23 21:17:04 compute-1 podman[244229]: 2025-11-23 21:17:04.101209439 +0000 UTC m=+0.117857887 container init d1ab6f23c0d2d6ae14bb359c26fddd412e6a6447e012895ce26f36865930fd80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 23 21:17:04 compute-1 podman[244229]: 2025-11-23 21:17:04.106052227 +0000 UTC m=+0.122700665 container start d1ab6f23c0d2d6ae14bb359c26fddd412e6a6447e012895ce26f36865930fd80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 21:17:04 compute-1 nova_compute[230183]: 2025-11-23 21:17:04.108 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:17:04 compute-1 nova_compute[230183]: 2025-11-23 21:17:04.114 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 23 21:17:04 compute-1 nova_compute[230183]: 2025-11-23 21:17:04.118 230187 DEBUG nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:17:04 compute-1 nova_compute[230183]: 2025-11-23 21:17:04.118 230187 DEBUG nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:17:04 compute-1 nova_compute[230183]: 2025-11-23 21:17:04.119 230187 DEBUG nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:17:04 compute-1 nova_compute[230183]: 2025-11-23 21:17:04.119 230187 DEBUG nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:17:04 compute-1 nova_compute[230183]: 2025-11-23 21:17:04.119 230187 DEBUG nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:17:04 compute-1 nova_compute[230183]: 2025-11-23 21:17:04.120 230187 DEBUG nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 21:17:04 compute-1 neutron-haproxy-ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710[244243]: [NOTICE]   (244247) : New worker (244249) forked
Nov 23 21:17:04 compute-1 neutron-haproxy-ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710[244243]: [NOTICE]   (244247) : Loading success.
Nov 23 21:17:04 compute-1 nova_compute[230183]: 2025-11-23 21:17:04.148 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 23 21:17:04 compute-1 nova_compute[230183]: 2025-11-23 21:17:04.178 230187 INFO nova.compute.manager [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Took 8.81 seconds to spawn the instance on the hypervisor.
Nov 23 21:17:04 compute-1 nova_compute[230183]: 2025-11-23 21:17:04.178 230187 DEBUG nova.compute.manager [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:17:04 compute-1 nova_compute[230183]: 2025-11-23 21:17:04.235 230187 INFO nova.compute.manager [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Took 9.71 seconds to build instance.
Nov 23 21:17:04 compute-1 nova_compute[230183]: 2025-11-23 21:17:04.251 230187 DEBUG oslo_concurrency.lockutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:17:04 compute-1 ceph-mon[80135]: pgmap v1070: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 23 21:17:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:17:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:04.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:17:05 compute-1 nova_compute[230183]: 2025-11-23 21:17:05.445 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:17:05 compute-1 nova_compute[230183]: 2025-11-23 21:17:05.446 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:17:05 compute-1 nova_compute[230183]: 2025-11-23 21:17:05.446 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:17:05 compute-1 nova_compute[230183]: 2025-11-23 21:17:05.446 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 23 21:17:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:17:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:05.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:17:06 compute-1 nova_compute[230183]: 2025-11-23 21:17:06.184 230187 DEBUG nova.compute.manager [req-a184cfbb-3f6e-442f-b5ed-316c9e458ad9 req-4a7dfad3-8087-45ab-a942-1e9b44f61e5f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Received event network-vif-plugged-984010df-e5b5-45c2-9db5-f0046f5efd50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:17:06 compute-1 nova_compute[230183]: 2025-11-23 21:17:06.184 230187 DEBUG oslo_concurrency.lockutils [req-a184cfbb-3f6e-442f-b5ed-316c9e458ad9 req-4a7dfad3-8087-45ab-a942-1e9b44f61e5f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:17:06 compute-1 nova_compute[230183]: 2025-11-23 21:17:06.185 230187 DEBUG oslo_concurrency.lockutils [req-a184cfbb-3f6e-442f-b5ed-316c9e458ad9 req-4a7dfad3-8087-45ab-a942-1e9b44f61e5f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:17:06 compute-1 nova_compute[230183]: 2025-11-23 21:17:06.185 230187 DEBUG oslo_concurrency.lockutils [req-a184cfbb-3f6e-442f-b5ed-316c9e458ad9 req-4a7dfad3-8087-45ab-a942-1e9b44f61e5f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:17:06 compute-1 nova_compute[230183]: 2025-11-23 21:17:06.186 230187 DEBUG nova.compute.manager [req-a184cfbb-3f6e-442f-b5ed-316c9e458ad9 req-4a7dfad3-8087-45ab-a942-1e9b44f61e5f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] No waiting events found dispatching network-vif-plugged-984010df-e5b5-45c2-9db5-f0046f5efd50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 21:17:06 compute-1 nova_compute[230183]: 2025-11-23 21:17:06.186 230187 WARNING nova.compute.manager [req-a184cfbb-3f6e-442f-b5ed-316c9e458ad9 req-4a7dfad3-8087-45ab-a942-1e9b44f61e5f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Received unexpected event network-vif-plugged-984010df-e5b5-45c2-9db5-f0046f5efd50 for instance with vm_state active and task_state None.
Nov 23 21:17:06 compute-1 nova_compute[230183]: 2025-11-23 21:17:06.450 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:17:06 compute-1 nova_compute[230183]: 2025-11-23 21:17:06.476 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:17:06 compute-1 nova_compute[230183]: 2025-11-23 21:17:06.476 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:17:06 compute-1 nova_compute[230183]: 2025-11-23 21:17:06.477 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:17:06 compute-1 nova_compute[230183]: 2025-11-23 21:17:06.477 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 21:17:06 compute-1 nova_compute[230183]: 2025-11-23 21:17:06.477 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:17:06 compute-1 ceph-mon[80135]: pgmap v1071: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Nov 23 21:17:06 compute-1 podman[244260]: 2025-11-23 21:17:06.639701772 +0000 UTC m=+0.057948319 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=multipathd)
Nov 23 21:17:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:17:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:06.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:17:06 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:17:06 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3363301861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:17:06 compute-1 nova_compute[230183]: 2025-11-23 21:17:06.898 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:17:06 compute-1 sudo[244300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:17:06 compute-1 sudo[244300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:17:06 compute-1 sudo[244300]: pam_unix(sudo:session): session closed for user root
Nov 23 21:17:06 compute-1 nova_compute[230183]: 2025-11-23 21:17:06.994 230187 DEBUG nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 23 21:17:06 compute-1 nova_compute[230183]: 2025-11-23 21:17:06.995 230187 DEBUG nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 23 21:17:07 compute-1 nova_compute[230183]: 2025-11-23 21:17:07.160 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 21:17:07 compute-1 nova_compute[230183]: 2025-11-23 21:17:07.162 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4743MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 21:17:07 compute-1 nova_compute[230183]: 2025-11-23 21:17:07.162 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:17:07 compute-1 nova_compute[230183]: 2025-11-23 21:17:07.163 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:17:07 compute-1 nova_compute[230183]: 2025-11-23 21:17:07.328 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Instance f638f2b4-bdf0-46c2-81d0-143511a01fb5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 23 21:17:07 compute-1 nova_compute[230183]: 2025-11-23 21:17:07.329 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 21:17:07 compute-1 nova_compute[230183]: 2025-11-23 21:17:07.329 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 21:17:07 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:17:07 compute-1 nova_compute[230183]: 2025-11-23 21:17:07.431 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Refreshing inventories for resource provider bb217351-d4c8-44a4-9137-08393a1f72bc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 23 21:17:07 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3363301861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:17:07 compute-1 nova_compute[230183]: 2025-11-23 21:17:07.535 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Updating ProviderTree inventory for provider bb217351-d4c8-44a4-9137-08393a1f72bc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 23 21:17:07 compute-1 nova_compute[230183]: 2025-11-23 21:17:07.535 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Updating inventory in ProviderTree for provider bb217351-d4c8-44a4-9137-08393a1f72bc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 23 21:17:07 compute-1 nova_compute[230183]: 2025-11-23 21:17:07.538 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:07 compute-1 nova_compute[230183]: 2025-11-23 21:17:07.555 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Refreshing aggregate associations for resource provider bb217351-d4c8-44a4-9137-08393a1f72bc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 23 21:17:07 compute-1 nova_compute[230183]: 2025-11-23 21:17:07.579 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Refreshing trait associations for resource provider bb217351-d4c8-44a4-9137-08393a1f72bc, traits: COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,HW_CPU_X86_AVX,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SHA,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SVM,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 23 21:17:07 compute-1 nova_compute[230183]: 2025-11-23 21:17:07.627 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:17:07 compute-1 nova_compute[230183]: 2025-11-23 21:17:07.699 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:17:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:07.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:17:08 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:17:08 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2477343784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:17:08 compute-1 nova_compute[230183]: 2025-11-23 21:17:08.088 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:17:08 compute-1 nova_compute[230183]: 2025-11-23 21:17:08.093 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:17:08 compute-1 nova_compute[230183]: 2025-11-23 21:17:08.108 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:17:08 compute-1 nova_compute[230183]: 2025-11-23 21:17:08.130 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 21:17:08 compute-1 nova_compute[230183]: 2025-11-23 21:17:08.130 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.968s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:17:08 compute-1 nova_compute[230183]: 2025-11-23 21:17:08.131 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:17:08 compute-1 nova_compute[230183]: 2025-11-23 21:17:08.131 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 23 21:17:08 compute-1 nova_compute[230183]: 2025-11-23 21:17:08.143 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 23 21:17:08 compute-1 ceph-mon[80135]: pgmap v1072: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Nov 23 21:17:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/3981719019' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 21:17:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/3981719019' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 21:17:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2477343784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:17:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:17:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:08.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:17:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:17:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:09.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:17:10 compute-1 nova_compute[230183]: 2025-11-23 21:17:10.120 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:17:10 compute-1 nova_compute[230183]: 2025-11-23 21:17:10.121 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:17:10 compute-1 nova_compute[230183]: 2025-11-23 21:17:10.122 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:17:10 compute-1 nova_compute[230183]: 2025-11-23 21:17:10.122 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:17:10 compute-1 nova_compute[230183]: 2025-11-23 21:17:10.122 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 21:17:10 compute-1 nova_compute[230183]: 2025-11-23 21:17:10.428 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:17:10 compute-1 nova_compute[230183]: 2025-11-23 21:17:10.428 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 21:17:10 compute-1 nova_compute[230183]: 2025-11-23 21:17:10.428 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 21:17:10 compute-1 ceph-mon[80135]: pgmap v1073: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Nov 23 21:17:10 compute-1 nova_compute[230183]: 2025-11-23 21:17:10.666 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "refresh_cache-f638f2b4-bdf0-46c2-81d0-143511a01fb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:17:10 compute-1 nova_compute[230183]: 2025-11-23 21:17:10.667 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquired lock "refresh_cache-f638f2b4-bdf0-46c2-81d0-143511a01fb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:17:10 compute-1 nova_compute[230183]: 2025-11-23 21:17:10.668 230187 DEBUG nova.network.neutron [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 23 21:17:10 compute-1 nova_compute[230183]: 2025-11-23 21:17:10.669 230187 DEBUG nova.objects.instance [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lazy-loading 'info_cache' on Instance uuid f638f2b4-bdf0-46c2-81d0-143511a01fb5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:17:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:17:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:10.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:17:11 compute-1 ovn_controller[132845]: 2025-11-23T21:17:11Z|00128|binding|INFO|Releasing lport 4d2b4219-31d6-45aa-9e4b-1dde83c9be1c from this chassis (sb_readonly=0)
Nov 23 21:17:11 compute-1 NetworkManager[49021]: <info>  [1763932631.0485] manager: (patch-br-int-to-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Nov 23 21:17:11 compute-1 NetworkManager[49021]: <info>  [1763932631.0491] manager: (patch-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Nov 23 21:17:11 compute-1 nova_compute[230183]: 2025-11-23 21:17:11.063 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:11 compute-1 ovn_controller[132845]: 2025-11-23T21:17:11Z|00129|binding|INFO|Releasing lport 4d2b4219-31d6-45aa-9e4b-1dde83c9be1c from this chassis (sb_readonly=0)
Nov 23 21:17:11 compute-1 nova_compute[230183]: 2025-11-23 21:17:11.083 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:11 compute-1 nova_compute[230183]: 2025-11-23 21:17:11.089 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:11 compute-1 nova_compute[230183]: 2025-11-23 21:17:11.599 230187 DEBUG nova.compute.manager [req-6c1f1583-7138-4c3b-8fd9-faa660d1ca20 req-f6298826-f633-45c0-bc0d-6ad2cef5b4f4 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Received event network-changed-984010df-e5b5-45c2-9db5-f0046f5efd50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:17:11 compute-1 nova_compute[230183]: 2025-11-23 21:17:11.600 230187 DEBUG nova.compute.manager [req-6c1f1583-7138-4c3b-8fd9-faa660d1ca20 req-f6298826-f633-45c0-bc0d-6ad2cef5b4f4 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Refreshing instance network info cache due to event network-changed-984010df-e5b5-45c2-9db5-f0046f5efd50. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 23 21:17:11 compute-1 nova_compute[230183]: 2025-11-23 21:17:11.600 230187 DEBUG oslo_concurrency.lockutils [req-6c1f1583-7138-4c3b-8fd9-faa660d1ca20 req-f6298826-f633-45c0-bc0d-6ad2cef5b4f4 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-f638f2b4-bdf0-46c2-81d0-143511a01fb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:17:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:17:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:11.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:17:12 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:17:12 compute-1 nova_compute[230183]: 2025-11-23 21:17:12.540 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:12 compute-1 nova_compute[230183]: 2025-11-23 21:17:12.701 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:12 compute-1 ceph-mon[80135]: pgmap v1074: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 160 KiB/s wr, 87 op/s
Nov 23 21:17:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:17:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:12.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:17:13 compute-1 nova_compute[230183]: 2025-11-23 21:17:13.110 230187 DEBUG nova.network.neutron [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Updating instance_info_cache with network_info: [{"id": "984010df-e5b5-45c2-9db5-f0046f5efd50", "address": "fa:16:3e:63:db:14", "network": {"id": "45f4166e-7bc0-4981-9683-ade606fa5710", "bridge": "br-int", "label": "tempest-network-smoke--1927222341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap984010df-e5", "ovs_interfaceid": "984010df-e5b5-45c2-9db5-f0046f5efd50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:17:13 compute-1 nova_compute[230183]: 2025-11-23 21:17:13.132 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Releasing lock "refresh_cache-f638f2b4-bdf0-46c2-81d0-143511a01fb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:17:13 compute-1 nova_compute[230183]: 2025-11-23 21:17:13.132 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 23 21:17:13 compute-1 nova_compute[230183]: 2025-11-23 21:17:13.132 230187 DEBUG oslo_concurrency.lockutils [req-6c1f1583-7138-4c3b-8fd9-faa660d1ca20 req-f6298826-f633-45c0-bc0d-6ad2cef5b4f4 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-f638f2b4-bdf0-46c2-81d0-143511a01fb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:17:13 compute-1 nova_compute[230183]: 2025-11-23 21:17:13.133 230187 DEBUG nova.network.neutron [req-6c1f1583-7138-4c3b-8fd9-faa660d1ca20 req-f6298826-f633-45c0-bc0d-6ad2cef5b4f4 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Refreshing network info cache for port 984010df-e5b5-45c2-9db5-f0046f5efd50 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 23 21:17:13 compute-1 nova_compute[230183]: 2025-11-23 21:17:13.134 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:17:13 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2950651412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:17:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:17:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:13.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:17:14 compute-1 nova_compute[230183]: 2025-11-23 21:17:14.145 230187 DEBUG nova.network.neutron [req-6c1f1583-7138-4c3b-8fd9-faa660d1ca20 req-f6298826-f633-45c0-bc0d-6ad2cef5b4f4 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Updated VIF entry in instance network info cache for port 984010df-e5b5-45c2-9db5-f0046f5efd50. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 23 21:17:14 compute-1 nova_compute[230183]: 2025-11-23 21:17:14.145 230187 DEBUG nova.network.neutron [req-6c1f1583-7138-4c3b-8fd9-faa660d1ca20 req-f6298826-f633-45c0-bc0d-6ad2cef5b4f4 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Updating instance_info_cache with network_info: [{"id": "984010df-e5b5-45c2-9db5-f0046f5efd50", "address": "fa:16:3e:63:db:14", "network": {"id": "45f4166e-7bc0-4981-9683-ade606fa5710", "bridge": "br-int", "label": "tempest-network-smoke--1927222341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap984010df-e5", "ovs_interfaceid": "984010df-e5b5-45c2-9db5-f0046f5efd50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:17:14 compute-1 nova_compute[230183]: 2025-11-23 21:17:14.156 230187 DEBUG oslo_concurrency.lockutils [req-6c1f1583-7138-4c3b-8fd9-faa660d1ca20 req-f6298826-f633-45c0-bc0d-6ad2cef5b4f4 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-f638f2b4-bdf0-46c2-81d0-143511a01fb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:17:14 compute-1 ceph-mon[80135]: pgmap v1075: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 23 21:17:14 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2299030320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:17:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:17:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:14.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:17:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:17:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:15.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:17:16 compute-1 ceph-mon[80135]: pgmap v1076: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Nov 23 21:17:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:17:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:16.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:17:17 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:17:17 compute-1 nova_compute[230183]: 2025-11-23 21:17:17.543 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:17 compute-1 nova_compute[230183]: 2025-11-23 21:17:17.703 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:17:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:17.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:17:18 compute-1 ovn_controller[132845]: 2025-11-23T21:17:18Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:63:db:14 10.100.0.10
Nov 23 21:17:18 compute-1 ovn_controller[132845]: 2025-11-23T21:17:18Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:63:db:14 10.100.0.10
Nov 23 21:17:18 compute-1 ceph-mon[80135]: pgmap v1077: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Nov 23 21:17:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:17:18 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/738017921' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:17:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:17:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:18.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:17:19 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3964643656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:17:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:17:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:19.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:17:20 compute-1 ceph-mon[80135]: pgmap v1078: 337 pgs: 337 active+clean; 115 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 112 op/s
Nov 23 21:17:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:17:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:20.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:17:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:17:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:21.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:17:22 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:17:22 compute-1 nova_compute[230183]: 2025-11-23 21:17:22.545 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:22 compute-1 nova_compute[230183]: 2025-11-23 21:17:22.763 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:22 compute-1 ceph-mon[80135]: pgmap v1079: 337 pgs: 337 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 519 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Nov 23 21:17:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:17:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:22.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:17:23 compute-1 ceph-mon[80135]: pgmap v1080: 337 pgs: 337 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 23 21:17:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:17:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:23.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:17:23 compute-1 nova_compute[230183]: 2025-11-23 21:17:23.966 230187 INFO nova.compute.manager [None req-57d5e31a-c96e-4edd-9c12-886709572b81 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Get console output
Nov 23 21:17:23 compute-1 nova_compute[230183]: 2025-11-23 21:17:23.972 234120 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 23 21:17:24 compute-1 ovn_controller[132845]: 2025-11-23T21:17:24Z|00130|binding|INFO|Releasing lport 4d2b4219-31d6-45aa-9e4b-1dde83c9be1c from this chassis (sb_readonly=0)
Nov 23 21:17:24 compute-1 nova_compute[230183]: 2025-11-23 21:17:24.741 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:17:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:24.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:17:24 compute-1 ovn_controller[132845]: 2025-11-23T21:17:24Z|00131|binding|INFO|Releasing lport 4d2b4219-31d6-45aa-9e4b-1dde83c9be1c from this chassis (sb_readonly=0)
Nov 23 21:17:24 compute-1 nova_compute[230183]: 2025-11-23 21:17:24.837 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:17:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:25.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:17:25 compute-1 nova_compute[230183]: 2025-11-23 21:17:25.967 230187 INFO nova.compute.manager [None req-97e4cc55-5bb6-4a78-9dc9-232dc0466ff5 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Get console output
Nov 23 21:17:25 compute-1 nova_compute[230183]: 2025-11-23 21:17:25.971 234120 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 23 21:17:26 compute-1 sudo[244360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 21:17:26 compute-1 sudo[244360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:17:26 compute-1 sudo[244360]: pam_unix(sudo:session): session closed for user root
Nov 23 21:17:26 compute-1 sudo[244385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 21:17:26 compute-1 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 21:17:26 compute-1 sudo[244385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:17:26 compute-1 ceph-mon[80135]: pgmap v1081: 337 pgs: 337 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 23 21:17:26 compute-1 sudo[244385]: pam_unix(sudo:session): session closed for user root
Nov 23 21:17:26 compute-1 nova_compute[230183]: 2025-11-23 21:17:26.742 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:26 compute-1 NetworkManager[49021]: <info>  [1763932646.7435] manager: (patch-br-int-to-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Nov 23 21:17:26 compute-1 NetworkManager[49021]: <info>  [1763932646.7444] manager: (patch-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Nov 23 21:17:26 compute-1 ovn_controller[132845]: 2025-11-23T21:17:26Z|00132|binding|INFO|Releasing lport 4d2b4219-31d6-45aa-9e4b-1dde83c9be1c from this chassis (sb_readonly=0)
Nov 23 21:17:26 compute-1 nova_compute[230183]: 2025-11-23 21:17:26.795 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:26 compute-1 nova_compute[230183]: 2025-11-23 21:17:26.801 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:17:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:26.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:17:27 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:27.028 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 21:17:27 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:27.030 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 23 21:17:27 compute-1 nova_compute[230183]: 2025-11-23 21:17:27.066 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:27 compute-1 sudo[244441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:17:27 compute-1 sudo[244441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:17:27 compute-1 sudo[244441]: pam_unix(sudo:session): session closed for user root
Nov 23 21:17:27 compute-1 nova_compute[230183]: 2025-11-23 21:17:27.106 230187 INFO nova.compute.manager [None req-5f8abde0-b8c2-4923-b872-2bd8b2d7a13f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Get console output
Nov 23 21:17:27 compute-1 nova_compute[230183]: 2025-11-23 21:17:27.110 234120 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 23 21:17:27 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:17:27 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 21:17:27 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:17:27 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:17:27 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 21:17:27 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 21:17:27 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:17:27 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:17:27 compute-1 nova_compute[230183]: 2025-11-23 21:17:27.547 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:27 compute-1 nova_compute[230183]: 2025-11-23 21:17:27.765 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:17:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:27.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:17:27 compute-1 nova_compute[230183]: 2025-11-23 21:17:27.982 230187 DEBUG nova.compute.manager [req-1908fb14-e346-4176-89c5-04dbbbe37355 req-f45f3cff-fc0a-44f9-a4b7-5f1c9e56d068 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Received event network-changed-984010df-e5b5-45c2-9db5-f0046f5efd50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:17:27 compute-1 nova_compute[230183]: 2025-11-23 21:17:27.982 230187 DEBUG nova.compute.manager [req-1908fb14-e346-4176-89c5-04dbbbe37355 req-f45f3cff-fc0a-44f9-a4b7-5f1c9e56d068 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Refreshing instance network info cache due to event network-changed-984010df-e5b5-45c2-9db5-f0046f5efd50. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 23 21:17:27 compute-1 nova_compute[230183]: 2025-11-23 21:17:27.983 230187 DEBUG oslo_concurrency.lockutils [req-1908fb14-e346-4176-89c5-04dbbbe37355 req-f45f3cff-fc0a-44f9-a4b7-5f1c9e56d068 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-f638f2b4-bdf0-46c2-81d0-143511a01fb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 21:17:27 compute-1 nova_compute[230183]: 2025-11-23 21:17:27.983 230187 DEBUG oslo_concurrency.lockutils [req-1908fb14-e346-4176-89c5-04dbbbe37355 req-f45f3cff-fc0a-44f9-a4b7-5f1c9e56d068 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-f638f2b4-bdf0-46c2-81d0-143511a01fb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 21:17:27 compute-1 nova_compute[230183]: 2025-11-23 21:17:27.983 230187 DEBUG nova.network.neutron [req-1908fb14-e346-4176-89c5-04dbbbe37355 req-f45f3cff-fc0a-44f9-a4b7-5f1c9e56d068 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Refreshing network info cache for port 984010df-e5b5-45c2-9db5-f0046f5efd50 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 23 21:17:28 compute-1 nova_compute[230183]: 2025-11-23 21:17:28.042 230187 DEBUG oslo_concurrency.lockutils [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:17:28 compute-1 nova_compute[230183]: 2025-11-23 21:17:28.043 230187 DEBUG oslo_concurrency.lockutils [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:17:28 compute-1 nova_compute[230183]: 2025-11-23 21:17:28.043 230187 DEBUG oslo_concurrency.lockutils [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:17:28 compute-1 nova_compute[230183]: 2025-11-23 21:17:28.043 230187 DEBUG oslo_concurrency.lockutils [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:17:28 compute-1 nova_compute[230183]: 2025-11-23 21:17:28.043 230187 DEBUG oslo_concurrency.lockutils [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:17:28 compute-1 nova_compute[230183]: 2025-11-23 21:17:28.045 230187 INFO nova.compute.manager [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Terminating instance
Nov 23 21:17:28 compute-1 nova_compute[230183]: 2025-11-23 21:17:28.046 230187 DEBUG nova.compute.manager [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 23 21:17:28 compute-1 kernel: tap984010df-e5 (unregistering): left promiscuous mode
Nov 23 21:17:28 compute-1 NetworkManager[49021]: <info>  [1763932648.1083] device (tap984010df-e5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 23 21:17:28 compute-1 ovn_controller[132845]: 2025-11-23T21:17:28Z|00133|binding|INFO|Releasing lport 984010df-e5b5-45c2-9db5-f0046f5efd50 from this chassis (sb_readonly=0)
Nov 23 21:17:28 compute-1 ovn_controller[132845]: 2025-11-23T21:17:28Z|00134|binding|INFO|Setting lport 984010df-e5b5-45c2-9db5-f0046f5efd50 down in Southbound
Nov 23 21:17:28 compute-1 ovn_controller[132845]: 2025-11-23T21:17:28Z|00135|binding|INFO|Removing iface tap984010df-e5 ovn-installed in OVS
Nov 23 21:17:28 compute-1 nova_compute[230183]: 2025-11-23 21:17:28.151 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:28 compute-1 nova_compute[230183]: 2025-11-23 21:17:28.155 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:28 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:28.161 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:db:14 10.100.0.10'], port_security=['fa:16:3e:63:db:14 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f638f2b4-bdf0-46c2-81d0-143511a01fb5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45f4166e-7bc0-4981-9683-ade606fa5710', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ba908e3d-1310-4719-83e3-3b0a3d387de5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=84c02252-eea5-46a3-9f52-20439e666f31, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=984010df-e5b5-45c2-9db5-f0046f5efd50) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 21:17:28 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:28.162 142158 INFO neutron.agent.ovn.metadata.agent [-] Port 984010df-e5b5-45c2-9db5-f0046f5efd50 in datapath 45f4166e-7bc0-4981-9683-ade606fa5710 unbound from our chassis
Nov 23 21:17:28 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:28.163 142158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 45f4166e-7bc0-4981-9683-ade606fa5710, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 23 21:17:28 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:28.164 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[dd0f0084-28ec-4f28-ba2c-986a610fa243]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:17:28 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:28.164 142158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710 namespace which is not needed anymore
Nov 23 21:17:28 compute-1 nova_compute[230183]: 2025-11-23 21:17:28.167 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:28 compute-1 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Nov 23 21:17:28 compute-1 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d0000000d.scope: Consumed 13.268s CPU time.
Nov 23 21:17:28 compute-1 systemd-machined[193469]: Machine qemu-8-instance-0000000d terminated.
Nov 23 21:17:28 compute-1 nova_compute[230183]: 2025-11-23 21:17:28.279 230187 INFO nova.virt.libvirt.driver [-] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Instance destroyed successfully.
Nov 23 21:17:28 compute-1 nova_compute[230183]: 2025-11-23 21:17:28.280 230187 DEBUG nova.objects.instance [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'resources' on Instance uuid f638f2b4-bdf0-46c2-81d0-143511a01fb5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 21:17:28 compute-1 neutron-haproxy-ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710[244243]: [NOTICE]   (244247) : haproxy version is 2.8.14-c23fe91
Nov 23 21:17:28 compute-1 neutron-haproxy-ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710[244243]: [NOTICE]   (244247) : path to executable is /usr/sbin/haproxy
Nov 23 21:17:28 compute-1 neutron-haproxy-ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710[244243]: [ALERT]    (244247) : Current worker (244249) exited with code 143 (Terminated)
Nov 23 21:17:28 compute-1 neutron-haproxy-ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710[244243]: [WARNING]  (244247) : All workers exited. Exiting... (0)
Nov 23 21:17:28 compute-1 systemd[1]: libpod-d1ab6f23c0d2d6ae14bb359c26fddd412e6a6447e012895ce26f36865930fd80.scope: Deactivated successfully.
Nov 23 21:17:28 compute-1 nova_compute[230183]: 2025-11-23 21:17:28.288 230187 DEBUG nova.virt.libvirt.vif [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:16:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-793817431',display_name='tempest-TestNetworkBasicOps-server-793817431',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-793817431',id=13,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFyCHalitTYHY+i3k7NGtIz/axejAHzuAlVnR4e5KMHIjAE7Fj+3ovJsaUKuZw9NPKsJ0qVqgikm8FkvL2Pu0+xYGcJBA97J85NKDWDS+eoNhScnnixkt+4uoxHyqB5n7A==',key_name='tempest-TestNetworkBasicOps-1599562746',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:17:04Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-gf1xk21n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:17:04Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=f638f2b4-bdf0-46c2-81d0-143511a01fb5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "984010df-e5b5-45c2-9db5-f0046f5efd50", "address": "fa:16:3e:63:db:14", "network": {"id": "45f4166e-7bc0-4981-9683-ade606fa5710", "bridge": "br-int", "label": "tempest-network-smoke--1927222341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap984010df-e5", "ovs_interfaceid": "984010df-e5b5-45c2-9db5-f0046f5efd50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 23 21:17:28 compute-1 nova_compute[230183]: 2025-11-23 21:17:28.289 230187 DEBUG nova.network.os_vif_util [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "984010df-e5b5-45c2-9db5-f0046f5efd50", "address": "fa:16:3e:63:db:14", "network": {"id": "45f4166e-7bc0-4981-9683-ade606fa5710", "bridge": "br-int", "label": "tempest-network-smoke--1927222341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap984010df-e5", "ovs_interfaceid": "984010df-e5b5-45c2-9db5-f0046f5efd50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 23 21:17:28 compute-1 nova_compute[230183]: 2025-11-23 21:17:28.290 230187 DEBUG nova.network.os_vif_util [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:63:db:14,bridge_name='br-int',has_traffic_filtering=True,id=984010df-e5b5-45c2-9db5-f0046f5efd50,network=Network(45f4166e-7bc0-4981-9683-ade606fa5710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap984010df-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 23 21:17:28 compute-1 nova_compute[230183]: 2025-11-23 21:17:28.291 230187 DEBUG os_vif [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:63:db:14,bridge_name='br-int',has_traffic_filtering=True,id=984010df-e5b5-45c2-9db5-f0046f5efd50,network=Network(45f4166e-7bc0-4981-9683-ade606fa5710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap984010df-e5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 23 21:17:28 compute-1 nova_compute[230183]: 2025-11-23 21:17:28.292 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:28 compute-1 nova_compute[230183]: 2025-11-23 21:17:28.292 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap984010df-e5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:17:28 compute-1 nova_compute[230183]: 2025-11-23 21:17:28.294 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:28 compute-1 nova_compute[230183]: 2025-11-23 21:17:28.295 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:28 compute-1 podman[244491]: 2025-11-23 21:17:28.297126356 +0000 UTC m=+0.050676313 container died d1ab6f23c0d2d6ae14bb359c26fddd412e6a6447e012895ce26f36865930fd80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 21:17:28 compute-1 nova_compute[230183]: 2025-11-23 21:17:28.298 230187 INFO os_vif [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:63:db:14,bridge_name='br-int',has_traffic_filtering=True,id=984010df-e5b5-45c2-9db5-f0046f5efd50,network=Network(45f4166e-7bc0-4981-9683-ade606fa5710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap984010df-e5')
Nov 23 21:17:28 compute-1 ceph-mon[80135]: pgmap v1082: 337 pgs: 337 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 312 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Nov 23 21:17:28 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d1ab6f23c0d2d6ae14bb359c26fddd412e6a6447e012895ce26f36865930fd80-userdata-shm.mount: Deactivated successfully.
Nov 23 21:17:28 compute-1 systemd[1]: var-lib-containers-storage-overlay-c03b0f65c36e2efcc867601f87a616208b57ce73396437f2aca52a4ea44641ae-merged.mount: Deactivated successfully.
Nov 23 21:17:28 compute-1 podman[244491]: 2025-11-23 21:17:28.340431002 +0000 UTC m=+0.093980939 container cleanup d1ab6f23c0d2d6ae14bb359c26fddd412e6a6447e012895ce26f36865930fd80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 21:17:28 compute-1 systemd[1]: libpod-conmon-d1ab6f23c0d2d6ae14bb359c26fddd412e6a6447e012895ce26f36865930fd80.scope: Deactivated successfully.
Nov 23 21:17:28 compute-1 podman[244552]: 2025-11-23 21:17:28.398297657 +0000 UTC m=+0.038689445 container remove d1ab6f23c0d2d6ae14bb359c26fddd412e6a6447e012895ce26f36865930fd80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 23 21:17:28 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:28.403 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[c4851907-0083-4365-ba9f-797e11ad3902]: (4, ('Sun Nov 23 09:17:28 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710 (d1ab6f23c0d2d6ae14bb359c26fddd412e6a6447e012895ce26f36865930fd80)\nd1ab6f23c0d2d6ae14bb359c26fddd412e6a6447e012895ce26f36865930fd80\nSun Nov 23 09:17:28 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710 (d1ab6f23c0d2d6ae14bb359c26fddd412e6a6447e012895ce26f36865930fd80)\nd1ab6f23c0d2d6ae14bb359c26fddd412e6a6447e012895ce26f36865930fd80\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:17:28 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:28.405 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[5a4d052e-c3f0-4ff9-bce8-cb836caed1e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:17:28 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:28.406 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45f4166e-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:17:28 compute-1 nova_compute[230183]: 2025-11-23 21:17:28.407 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:28 compute-1 kernel: tap45f4166e-70: left promiscuous mode
Nov 23 21:17:28 compute-1 nova_compute[230183]: 2025-11-23 21:17:28.420 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:28 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:28.423 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[7d2f7e0a-a01b-4728-ba7b-7c74436acc2f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:17:28 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:28.447 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[8cb33155-3deb-41f1-b182-8b24c95c5622]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:17:28 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:28.448 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[0266d81f-0793-4dfd-a02b-260ddaea32eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:17:28 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:28.462 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[c6f7b0a8-cf22-41f1-9917-8d8e03e1d1c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 457205, 'reachable_time': 37074, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244565, 'error': None, 'target': 'ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:17:28 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:28.464 142272 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 23 21:17:28 compute-1 systemd[1]: run-netns-ovnmeta\x2d45f4166e\x2d7bc0\x2d4981\x2d9683\x2dade606fa5710.mount: Deactivated successfully.
Nov 23 21:17:28 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:28.464 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[ac6f2fce-8524-4d55-b3db-0082dd6addd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 21:17:28 compute-1 nova_compute[230183]: 2025-11-23 21:17:28.733 230187 INFO nova.virt.libvirt.driver [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Deleting instance files /var/lib/nova/instances/f638f2b4-bdf0-46c2-81d0-143511a01fb5_del
Nov 23 21:17:28 compute-1 nova_compute[230183]: 2025-11-23 21:17:28.734 230187 INFO nova.virt.libvirt.driver [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Deletion of /var/lib/nova/instances/f638f2b4-bdf0-46c2-81d0-143511a01fb5_del complete
Nov 23 21:17:28 compute-1 nova_compute[230183]: 2025-11-23 21:17:28.792 230187 INFO nova.compute.manager [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Took 0.75 seconds to destroy the instance on the hypervisor.
Nov 23 21:17:28 compute-1 nova_compute[230183]: 2025-11-23 21:17:28.792 230187 DEBUG oslo.service.loopingcall [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 23 21:17:28 compute-1 nova_compute[230183]: 2025-11-23 21:17:28.792 230187 DEBUG nova.compute.manager [-] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 23 21:17:28 compute-1 nova_compute[230183]: 2025-11-23 21:17:28.792 230187 DEBUG nova.network.neutron [-] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 23 21:17:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:17:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:28.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:17:29 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:29.031 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d8ff4ac4-2bee-48db-b79e-2466bc4db046, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:17:29 compute-1 nova_compute[230183]: 2025-11-23 21:17:29.217 230187 DEBUG nova.network.neutron [req-1908fb14-e346-4176-89c5-04dbbbe37355 req-f45f3cff-fc0a-44f9-a4b7-5f1c9e56d068 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Updated VIF entry in instance network info cache for port 984010df-e5b5-45c2-9db5-f0046f5efd50. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 23 21:17:29 compute-1 nova_compute[230183]: 2025-11-23 21:17:29.218 230187 DEBUG nova.network.neutron [req-1908fb14-e346-4176-89c5-04dbbbe37355 req-f45f3cff-fc0a-44f9-a4b7-5f1c9e56d068 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Updating instance_info_cache with network_info: [{"id": "984010df-e5b5-45c2-9db5-f0046f5efd50", "address": "fa:16:3e:63:db:14", "network": {"id": "45f4166e-7bc0-4981-9683-ade606fa5710", "bridge": "br-int", "label": "tempest-network-smoke--1927222341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap984010df-e5", "ovs_interfaceid": "984010df-e5b5-45c2-9db5-f0046f5efd50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:17:29 compute-1 nova_compute[230183]: 2025-11-23 21:17:29.241 230187 DEBUG oslo_concurrency.lockutils [req-1908fb14-e346-4176-89c5-04dbbbe37355 req-f45f3cff-fc0a-44f9-a4b7-5f1c9e56d068 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-f638f2b4-bdf0-46c2-81d0-143511a01fb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 21:17:29 compute-1 nova_compute[230183]: 2025-11-23 21:17:29.455 230187 DEBUG nova.network.neutron [-] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 21:17:29 compute-1 nova_compute[230183]: 2025-11-23 21:17:29.468 230187 INFO nova.compute.manager [-] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Took 0.68 seconds to deallocate network for instance.
Nov 23 21:17:29 compute-1 nova_compute[230183]: 2025-11-23 21:17:29.505 230187 DEBUG oslo_concurrency.lockutils [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:17:29 compute-1 nova_compute[230183]: 2025-11-23 21:17:29.505 230187 DEBUG oslo_concurrency.lockutils [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:17:29 compute-1 nova_compute[230183]: 2025-11-23 21:17:29.558 230187 DEBUG oslo_concurrency.processutils [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:17:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:17:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:29.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:17:30 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:17:30 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3313607607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:17:30 compute-1 nova_compute[230183]: 2025-11-23 21:17:30.060 230187 DEBUG nova.compute.manager [req-6a37eb5a-a1c1-44de-81d2-a23f9b2ac978 req-9af17b8e-dbe0-401a-879a-a642ae8eed74 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Received event network-vif-unplugged-984010df-e5b5-45c2-9db5-f0046f5efd50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:17:30 compute-1 nova_compute[230183]: 2025-11-23 21:17:30.061 230187 DEBUG oslo_concurrency.lockutils [req-6a37eb5a-a1c1-44de-81d2-a23f9b2ac978 req-9af17b8e-dbe0-401a-879a-a642ae8eed74 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:17:30 compute-1 nova_compute[230183]: 2025-11-23 21:17:30.061 230187 DEBUG oslo_concurrency.lockutils [req-6a37eb5a-a1c1-44de-81d2-a23f9b2ac978 req-9af17b8e-dbe0-401a-879a-a642ae8eed74 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:17:30 compute-1 nova_compute[230183]: 2025-11-23 21:17:30.062 230187 DEBUG oslo_concurrency.lockutils [req-6a37eb5a-a1c1-44de-81d2-a23f9b2ac978 req-9af17b8e-dbe0-401a-879a-a642ae8eed74 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:17:30 compute-1 nova_compute[230183]: 2025-11-23 21:17:30.062 230187 DEBUG nova.compute.manager [req-6a37eb5a-a1c1-44de-81d2-a23f9b2ac978 req-9af17b8e-dbe0-401a-879a-a642ae8eed74 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] No waiting events found dispatching network-vif-unplugged-984010df-e5b5-45c2-9db5-f0046f5efd50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 21:17:30 compute-1 nova_compute[230183]: 2025-11-23 21:17:30.062 230187 WARNING nova.compute.manager [req-6a37eb5a-a1c1-44de-81d2-a23f9b2ac978 req-9af17b8e-dbe0-401a-879a-a642ae8eed74 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Received unexpected event network-vif-unplugged-984010df-e5b5-45c2-9db5-f0046f5efd50 for instance with vm_state deleted and task_state None.
Nov 23 21:17:30 compute-1 nova_compute[230183]: 2025-11-23 21:17:30.062 230187 DEBUG nova.compute.manager [req-6a37eb5a-a1c1-44de-81d2-a23f9b2ac978 req-9af17b8e-dbe0-401a-879a-a642ae8eed74 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Received event network-vif-plugged-984010df-e5b5-45c2-9db5-f0046f5efd50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:17:30 compute-1 nova_compute[230183]: 2025-11-23 21:17:30.062 230187 DEBUG oslo_concurrency.lockutils [req-6a37eb5a-a1c1-44de-81d2-a23f9b2ac978 req-9af17b8e-dbe0-401a-879a-a642ae8eed74 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:17:30 compute-1 nova_compute[230183]: 2025-11-23 21:17:30.063 230187 DEBUG oslo_concurrency.lockutils [req-6a37eb5a-a1c1-44de-81d2-a23f9b2ac978 req-9af17b8e-dbe0-401a-879a-a642ae8eed74 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:17:30 compute-1 nova_compute[230183]: 2025-11-23 21:17:30.063 230187 DEBUG oslo_concurrency.lockutils [req-6a37eb5a-a1c1-44de-81d2-a23f9b2ac978 req-9af17b8e-dbe0-401a-879a-a642ae8eed74 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:17:30 compute-1 nova_compute[230183]: 2025-11-23 21:17:30.063 230187 DEBUG nova.compute.manager [req-6a37eb5a-a1c1-44de-81d2-a23f9b2ac978 req-9af17b8e-dbe0-401a-879a-a642ae8eed74 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] No waiting events found dispatching network-vif-plugged-984010df-e5b5-45c2-9db5-f0046f5efd50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 21:17:30 compute-1 nova_compute[230183]: 2025-11-23 21:17:30.063 230187 WARNING nova.compute.manager [req-6a37eb5a-a1c1-44de-81d2-a23f9b2ac978 req-9af17b8e-dbe0-401a-879a-a642ae8eed74 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Received unexpected event network-vif-plugged-984010df-e5b5-45c2-9db5-f0046f5efd50 for instance with vm_state deleted and task_state None.
Nov 23 21:17:30 compute-1 nova_compute[230183]: 2025-11-23 21:17:30.063 230187 DEBUG nova.compute.manager [req-6a37eb5a-a1c1-44de-81d2-a23f9b2ac978 req-9af17b8e-dbe0-401a-879a-a642ae8eed74 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Received event network-vif-deleted-984010df-e5b5-45c2-9db5-f0046f5efd50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 21:17:30 compute-1 nova_compute[230183]: 2025-11-23 21:17:30.064 230187 DEBUG oslo_concurrency.processutils [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:17:30 compute-1 nova_compute[230183]: 2025-11-23 21:17:30.071 230187 DEBUG nova.compute.provider_tree [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:17:30 compute-1 nova_compute[230183]: 2025-11-23 21:17:30.083 230187 DEBUG nova.scheduler.client.report [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:17:30 compute-1 nova_compute[230183]: 2025-11-23 21:17:30.097 230187 DEBUG oslo_concurrency.lockutils [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:17:30 compute-1 nova_compute[230183]: 2025-11-23 21:17:30.119 230187 INFO nova.scheduler.client.report [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Deleted allocations for instance f638f2b4-bdf0-46c2-81d0-143511a01fb5
Nov 23 21:17:30 compute-1 nova_compute[230183]: 2025-11-23 21:17:30.170 230187 DEBUG oslo_concurrency.lockutils [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:17:30 compute-1 ceph-mon[80135]: pgmap v1083: 337 pgs: 337 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 312 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Nov 23 21:17:30 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3313607607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:17:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:17:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:30.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:17:31 compute-1 sudo[244590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 21:17:31 compute-1 sudo[244590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:17:31 compute-1 sudo[244590]: pam_unix(sudo:session): session closed for user root
Nov 23 21:17:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:17:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:31.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:17:32 compute-1 ceph-mon[80135]: pgmap v1084: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 892 KiB/s wr, 44 op/s
Nov 23 21:17:32 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:17:32 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:17:32 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:17:32 compute-1 nova_compute[230183]: 2025-11-23 21:17:32.806 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:17:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:32.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:17:33 compute-1 nova_compute[230183]: 2025-11-23 21:17:33.294 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:17:33 compute-1 podman[244617]: 2025-11-23 21:17:33.648887617 +0000 UTC m=+0.059270332 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 21:17:33 compute-1 podman[244616]: 2025-11-23 21:17:33.6879296 +0000 UTC m=+0.099103597 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 23 21:17:33 compute-1 nova_compute[230183]: 2025-11-23 21:17:33.731 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:33 compute-1 nova_compute[230183]: 2025-11-23 21:17:33.812 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:17:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:33.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:17:34 compute-1 ceph-mon[80135]: pgmap v1085: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 29 op/s
Nov 23 21:17:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:17:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:34.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:17:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:17:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:35.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:17:36 compute-1 ceph-mon[80135]: pgmap v1086: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 13 KiB/s wr, 30 op/s
Nov 23 21:17:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:17:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:36.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:17:37 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:17:37 compute-1 podman[244664]: 2025-11-23 21:17:37.660361361 +0000 UTC m=+0.062357876 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 21:17:37 compute-1 nova_compute[230183]: 2025-11-23 21:17:37.833 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:38 compute-1 nova_compute[230183]: 2025-11-23 21:17:38.295 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:17:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:37.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:17:38 compute-1 ceph-mon[80135]: pgmap v1087: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Nov 23 21:17:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:17:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:38.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:17:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:17:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:39.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:17:40 compute-1 ceph-mon[80135]: pgmap v1088: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 23 21:17:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:17:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:40.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:17:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:17:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:41.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:17:42 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:17:42 compute-1 ceph-mon[80135]: pgmap v1089: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Nov 23 21:17:42 compute-1 nova_compute[230183]: 2025-11-23 21:17:42.866 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:17:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:42.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:17:43 compute-1 nova_compute[230183]: 2025-11-23 21:17:43.277 230187 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763932648.2768478, f638f2b4-bdf0-46c2-81d0-143511a01fb5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 23 21:17:43 compute-1 nova_compute[230183]: 2025-11-23 21:17:43.278 230187 INFO nova.compute.manager [-] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] VM Stopped (Lifecycle Event)
Nov 23 21:17:43 compute-1 nova_compute[230183]: 2025-11-23 21:17:43.301 230187 DEBUG nova.compute.manager [None req-14be77be-a56f-4a8c-896f-97bbe55e44e2 - - - - - -] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 21:17:43 compute-1 nova_compute[230183]: 2025-11-23 21:17:43.306 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:17:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:43.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:17:44 compute-1 ceph-mon[80135]: pgmap v1090: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:17:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:17:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:44.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:17:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.002000052s ======
Nov 23 21:17:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:45.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Nov 23 21:17:46 compute-1 ceph-mon[80135]: pgmap v1091: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:17:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:17:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:46.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:17:47 compute-1 sudo[244691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:17:47 compute-1 sudo[244691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:17:47 compute-1 sudo[244691]: pam_unix(sudo:session): session closed for user root
Nov 23 21:17:47 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:17:47 compute-1 nova_compute[230183]: 2025-11-23 21:17:47.868 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:17:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:47.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:17:48 compute-1 nova_compute[230183]: 2025-11-23 21:17:48.308 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:48 compute-1 ceph-mon[80135]: pgmap v1092: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:17:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:17:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:17:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:48.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:17:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:17:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:49.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:17:50 compute-1 ceph-mon[80135]: pgmap v1093: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:17:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:17:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:50.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:17:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:51.075 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:17:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:51.077 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:17:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:17:51.077 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:17:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:17:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:51.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:17:52 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:17:52 compute-1 ceph-mon[80135]: pgmap v1094: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:17:52 compute-1 nova_compute[230183]: 2025-11-23 21:17:52.873 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:17:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:52.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:17:53 compute-1 nova_compute[230183]: 2025-11-23 21:17:53.310 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:17:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:53.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:17:54 compute-1 ceph-mon[80135]: pgmap v1095: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:17:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:17:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:54.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:17:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:17:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:55.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:17:56 compute-1 ceph-mon[80135]: pgmap v1096: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:17:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:17:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:56.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:17:57 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:17:57 compute-1 nova_compute[230183]: 2025-11-23 21:17:57.874 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:17:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:57.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:17:58 compute-1 nova_compute[230183]: 2025-11-23 21:17:58.311 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:17:58 compute-1 ceph-mon[80135]: pgmap v1097: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:17:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:17:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:58.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:17:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:17:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:17:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:59.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:18:00 compute-1 ceph-mon[80135]: pgmap v1098: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:18:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:18:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:00.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:18:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:18:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:01.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:18:02 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:18:02 compute-1 ceph-mon[80135]: pgmap v1099: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:18:02 compute-1 nova_compute[230183]: 2025-11-23 21:18:02.875 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:18:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:18:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:02.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:18:03 compute-1 nova_compute[230183]: 2025-11-23 21:18:03.313 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:18:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:18:03 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0.
Nov 23 21:18:03 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:18:03.576031) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 21:18:03 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61
Nov 23 21:18:03 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932683576078, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2366, "num_deletes": 251, "total_data_size": 6370831, "memory_usage": 6457888, "flush_reason": "Manual Compaction"}
Nov 23 21:18:03 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started
Nov 23 21:18:03 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932683602686, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 4091898, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31373, "largest_seqno": 33734, "table_properties": {"data_size": 4082334, "index_size": 6058, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19859, "raw_average_key_size": 20, "raw_value_size": 4063205, "raw_average_value_size": 4184, "num_data_blocks": 261, "num_entries": 971, "num_filter_entries": 971, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763932480, "oldest_key_time": 1763932480, "file_creation_time": 1763932683, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Nov 23 21:18:03 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 26733 microseconds, and 14584 cpu microseconds.
Nov 23 21:18:03 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 21:18:03 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:18:03.602757) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 4091898 bytes OK
Nov 23 21:18:03 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:18:03.602790) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started
Nov 23 21:18:03 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:18:03.604608) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done
Nov 23 21:18:03 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:18:03.604632) EVENT_LOG_v1 {"time_micros": 1763932683604625, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 21:18:03 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:18:03.604657) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 21:18:03 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 6360402, prev total WAL file size 6360402, number of live WAL files 2.
Nov 23 21:18:03 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:18:03 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:18:03.607267) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Nov 23 21:18:03 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 21:18:03 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(3995KB)], [60(12MB)]
Nov 23 21:18:03 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932683607331, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 16732366, "oldest_snapshot_seqno": -1}
Nov 23 21:18:03 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 6234 keys, 14606260 bytes, temperature: kUnknown
Nov 23 21:18:03 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932683724503, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 14606260, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14565042, "index_size": 24532, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15621, "raw_key_size": 159795, "raw_average_key_size": 25, "raw_value_size": 14453297, "raw_average_value_size": 2318, "num_data_blocks": 987, "num_entries": 6234, "num_filter_entries": 6234, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763932683, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}}
Nov 23 21:18:03 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 21:18:03 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:18:03.724757) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 14606260 bytes
Nov 23 21:18:03 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:18:03.725986) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 142.7 rd, 124.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 12.1 +0.0 blob) out(13.9 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 6755, records dropped: 521 output_compression: NoCompression
Nov 23 21:18:03 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:18:03.726012) EVENT_LOG_v1 {"time_micros": 1763932683726001, "job": 36, "event": "compaction_finished", "compaction_time_micros": 117241, "compaction_time_cpu_micros": 48731, "output_level": 6, "num_output_files": 1, "total_output_size": 14606260, "num_input_records": 6755, "num_output_records": 6234, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 21:18:03 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:18:03 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932683727071, "job": 36, "event": "table_file_deletion", "file_number": 62}
Nov 23 21:18:03 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:18:03 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932683729407, "job": 36, "event": "table_file_deletion", "file_number": 60}
Nov 23 21:18:03 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:18:03.607121) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:18:03 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:18:03.729504) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:18:03 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:18:03.729509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:18:03 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:18:03.729511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:18:03 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:18:03.729512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:18:03 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:18:03.729513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:18:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:18:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:03.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:18:04 compute-1 ovn_controller[132845]: 2025-11-23T21:18:04Z|00136|memory_trim|INFO|Detected inactivity (last active 30024 ms ago): trimming memory
Nov 23 21:18:04 compute-1 ceph-mon[80135]: pgmap v1100: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:18:04 compute-1 podman[244728]: 2025-11-23 21:18:04.671228395 +0000 UTC m=+0.071110789 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 23 21:18:04 compute-1 podman[244727]: 2025-11-23 21:18:04.693250493 +0000 UTC m=+0.108195499 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 23 21:18:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:18:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:04.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:18:05 compute-1 nova_compute[230183]: 2025-11-23 21:18:05.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:18:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:18:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:05.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:18:06 compute-1 nova_compute[230183]: 2025-11-23 21:18:06.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:18:06 compute-1 ceph-mon[80135]: pgmap v1101: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:18:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:18:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:06.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:18:07 compute-1 sudo[244773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:18:07 compute-1 sudo[244773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:18:07 compute-1 sudo[244773]: pam_unix(sudo:session): session closed for user root
Nov 23 21:18:07 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:18:07 compute-1 nova_compute[230183]: 2025-11-23 21:18:07.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:18:07 compute-1 nova_compute[230183]: 2025-11-23 21:18:07.455 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:18:07 compute-1 nova_compute[230183]: 2025-11-23 21:18:07.456 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:18:07 compute-1 nova_compute[230183]: 2025-11-23 21:18:07.456 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:18:07 compute-1 nova_compute[230183]: 2025-11-23 21:18:07.456 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 21:18:07 compute-1 nova_compute[230183]: 2025-11-23 21:18:07.456 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:18:07 compute-1 nova_compute[230183]: 2025-11-23 21:18:07.877 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:18:07 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:18:07 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4074893024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:18:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:18:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:07.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:18:07 compute-1 nova_compute[230183]: 2025-11-23 21:18:07.956 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:18:08 compute-1 podman[244822]: 2025-11-23 21:18:08.06088922 +0000 UTC m=+0.062611323 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd)
Nov 23 21:18:08 compute-1 nova_compute[230183]: 2025-11-23 21:18:08.113 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 21:18:08 compute-1 nova_compute[230183]: 2025-11-23 21:18:08.114 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4944MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 21:18:08 compute-1 nova_compute[230183]: 2025-11-23 21:18:08.114 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:18:08 compute-1 nova_compute[230183]: 2025-11-23 21:18:08.114 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:18:08 compute-1 nova_compute[230183]: 2025-11-23 21:18:08.181 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 21:18:08 compute-1 nova_compute[230183]: 2025-11-23 21:18:08.181 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 21:18:08 compute-1 nova_compute[230183]: 2025-11-23 21:18:08.204 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:18:08 compute-1 nova_compute[230183]: 2025-11-23 21:18:08.315 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:18:08 compute-1 ceph-mon[80135]: pgmap v1102: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:18:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/2931937022' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 21:18:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/2931937022' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 21:18:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/4074893024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:18:08 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:18:08 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3638365196' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:18:08 compute-1 nova_compute[230183]: 2025-11-23 21:18:08.637 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:18:08 compute-1 nova_compute[230183]: 2025-11-23 21:18:08.642 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:18:08 compute-1 nova_compute[230183]: 2025-11-23 21:18:08.655 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:18:08 compute-1 nova_compute[230183]: 2025-11-23 21:18:08.673 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 21:18:08 compute-1 nova_compute[230183]: 2025-11-23 21:18:08.673 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:18:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 21:18:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:08.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 21:18:09 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3638365196' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:18:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:18:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:09.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:18:10 compute-1 ceph-mon[80135]: pgmap v1103: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:18:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:18:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:10.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:18:11 compute-1 nova_compute[230183]: 2025-11-23 21:18:11.675 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:18:11 compute-1 nova_compute[230183]: 2025-11-23 21:18:11.675 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:18:11 compute-1 nova_compute[230183]: 2025-11-23 21:18:11.675 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 21:18:11 compute-1 nova_compute[230183]: 2025-11-23 21:18:11.676 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 21:18:11 compute-1 nova_compute[230183]: 2025-11-23 21:18:11.692 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 21:18:11 compute-1 nova_compute[230183]: 2025-11-23 21:18:11.692 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:18:11 compute-1 nova_compute[230183]: 2025-11-23 21:18:11.692 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:18:11 compute-1 nova_compute[230183]: 2025-11-23 21:18:11.692 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:18:11 compute-1 nova_compute[230183]: 2025-11-23 21:18:11.693 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:18:11 compute-1 nova_compute[230183]: 2025-11-23 21:18:11.693 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 21:18:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:18:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:11.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:18:12 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:18:12 compute-1 ceph-mon[80135]: pgmap v1104: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:18:12 compute-1 nova_compute[230183]: 2025-11-23 21:18:12.880 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:18:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:18:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:12.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:18:13 compute-1 nova_compute[230183]: 2025-11-23 21:18:13.317 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:18:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:18:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:13.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:18:14 compute-1 ceph-mon[80135]: pgmap v1105: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:18:14 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2575718697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:18:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:18:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:14.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:18:15 compute-1 nova_compute[230183]: 2025-11-23 21:18:15.440 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:18:15 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3445667768' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:18:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:18:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:15.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:18:16 compute-1 ceph-mon[80135]: pgmap v1106: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:18:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:18:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:16.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:18:17 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:18:17 compute-1 nova_compute[230183]: 2025-11-23 21:18:17.883 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:18:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:18:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:17.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:18:18 compute-1 nova_compute[230183]: 2025-11-23 21:18:18.319 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:18:18 compute-1 ceph-mon[80135]: pgmap v1107: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:18:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:18:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:18:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:18.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:18:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:18:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:19.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:18:20 compute-1 ceph-mon[80135]: pgmap v1108: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:18:20 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2160981792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:18:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:18:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:20.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:18:21 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3600494203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:18:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:18:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:21.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:18:22 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:18:22 compute-1 ceph-mon[80135]: pgmap v1109: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:18:22 compute-1 nova_compute[230183]: 2025-11-23 21:18:22.885 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:18:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:18:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:23.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:18:23 compute-1 nova_compute[230183]: 2025-11-23 21:18:23.321 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:18:23 compute-1 ceph-mon[80135]: pgmap v1110: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:18:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:18:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:23.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:18:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:18:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:25.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:18:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:18:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:25.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:18:26 compute-1 ceph-mon[80135]: pgmap v1111: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:18:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:18:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:27.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:18:27 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:18:27 compute-1 sudo[244875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:18:27 compute-1 sudo[244875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:18:27 compute-1 sudo[244875]: pam_unix(sudo:session): session closed for user root
Nov 23 21:18:27 compute-1 nova_compute[230183]: 2025-11-23 21:18:27.939 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:18:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:18:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:27.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:18:28 compute-1 ceph-mon[80135]: pgmap v1112: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:18:28 compute-1 sshd-session[244901]: Accepted publickey for zuul from 192.168.122.10 port 44226 ssh2: ECDSA SHA256:7LF3rB/846W//CS4OIcVKlH1BXQGVCcZuH+b9rjPyTo
Nov 23 21:18:28 compute-1 systemd-logind[793]: New session 55 of user zuul.
Nov 23 21:18:28 compute-1 systemd[1]: Started Session 55 of User zuul.
Nov 23 21:18:28 compute-1 sshd-session[244901]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 21:18:28 compute-1 nova_compute[230183]: 2025-11-23 21:18:28.323 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:18:28 compute-1 sudo[244905]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Nov 23 21:18:28 compute-1 sudo[244905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:18:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:18:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:29.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:18:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:18:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:29.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:18:30 compute-1 ceph-mon[80135]: pgmap v1113: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:18:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:18:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:31.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:18:31 compute-1 sudo[245121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 21:18:31 compute-1 sudo[245121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:18:31 compute-1 sudo[245121]: pam_unix(sudo:session): session closed for user root
Nov 23 21:18:31 compute-1 sudo[245159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 21:18:31 compute-1 sudo[245159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:18:31 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Nov 23 21:18:31 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3061329609' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 23 21:18:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:18:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:31.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:18:32 compute-1 ceph-mon[80135]: from='client.26423 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:32 compute-1 ceph-mon[80135]: from='client.16422 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:32 compute-1 ceph-mon[80135]: pgmap v1114: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:18:32 compute-1 ceph-mon[80135]: from='client.26435 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:32 compute-1 ceph-mon[80135]: from='client.25879 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:32 compute-1 ceph-mon[80135]: from='client.16437 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:32 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3061329609' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 23 21:18:32 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1637570107' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 23 21:18:32 compute-1 sudo[245159]: pam_unix(sudo:session): session closed for user root
Nov 23 21:18:32 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:18:32 compute-1 nova_compute[230183]: 2025-11-23 21:18:32.978 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:18:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:18:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:33.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:18:33 compute-1 ceph-mon[80135]: from='client.25891 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:18:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 21:18:33 compute-1 ceph-mon[80135]: pgmap v1115: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 23 21:18:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:18:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:18:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 21:18:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 21:18:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:18:33 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1092092571' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 23 21:18:33 compute-1 nova_compute[230183]: 2025-11-23 21:18:33.326 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:18:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:18:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:33.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:18:34 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:18:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:18:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:35.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:18:35 compute-1 ceph-mon[80135]: pgmap v1116: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Nov 23 21:18:35 compute-1 podman[245328]: 2025-11-23 21:18:35.642695083 +0000 UTC m=+0.052878902 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 23 21:18:35 compute-1 podman[245327]: 2025-11-23 21:18:35.670631819 +0000 UTC m=+0.080600333 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 23 21:18:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:18:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:35.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:18:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:18:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:37.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:18:37 compute-1 sudo[245385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 21:18:37 compute-1 sudo[245385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:18:37 compute-1 sudo[245385]: pam_unix(sudo:session): session closed for user root
Nov 23 21:18:37 compute-1 ovs-vsctl[245424]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 23 21:18:37 compute-1 sshd-session[244871]: Connection closed by 162.142.125.201 port 63292 [preauth]
Nov 23 21:18:37 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:18:37 compute-1 ceph-mon[80135]: pgmap v1117: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 23 21:18:37 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:18:37 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:18:37 compute-1 nova_compute[230183]: 2025-11-23 21:18:37.977 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:18:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:18:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:37.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:18:38 compute-1 virtqemud[229705]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 23 21:18:38 compute-1 virtqemud[229705]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 23 21:18:38 compute-1 virtqemud[229705]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 23 21:18:38 compute-1 nova_compute[230183]: 2025-11-23 21:18:38.328 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:18:38 compute-1 podman[245639]: 2025-11-23 21:18:38.668622558 +0000 UTC m=+0.066716432 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3)
Nov 23 21:18:38 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: cache status {prefix=cache status} (starting...)
Nov 23 21:18:38 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 21:18:38 compute-1 lvm[245758]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 21:18:38 compute-1 lvm[245758]: VG ceph_vg0 finished
Nov 23 21:18:38 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: client ls {prefix=client ls} (starting...)
Nov 23 21:18:38 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 21:18:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:18:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:39.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:18:39 compute-1 ceph-mon[80135]: pgmap v1118: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 23 21:18:39 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: damage ls {prefix=damage ls} (starting...)
Nov 23 21:18:39 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 21:18:39 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0)
Nov 23 21:18:39 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2288206649' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 23 21:18:39 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: dump loads {prefix=dump loads} (starting...)
Nov 23 21:18:39 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 21:18:39 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 23 21:18:39 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 21:18:39 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 23 21:18:39 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 21:18:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:18:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:39.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:18:40 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 21:18:40 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/664271478' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:18:40 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 23 21:18:40 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 21:18:40 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 23 21:18:40 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 21:18:40 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0)
Nov 23 21:18:40 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2649457878' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 23 21:18:40 compute-1 ceph-mon[80135]: from='client.26456 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:40 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2288206649' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 23 21:18:40 compute-1 ceph-mon[80135]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 23 21:18:40 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/664271478' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:18:40 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2649457878' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 23 21:18:40 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 23 21:18:40 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 21:18:40 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 23 21:18:40 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 21:18:40 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: ops {prefix=ops} (starting...)
Nov 23 21:18:40 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 21:18:40 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Nov 23 21:18:40 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/544923333' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 23 21:18:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:18:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:41.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:18:41 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Nov 23 21:18:41 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/724445792' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 23 21:18:41 compute-1 ceph-mon[80135]: from='client.26468 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:41 compute-1 ceph-mon[80135]: from='client.26480 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:41 compute-1 ceph-mon[80135]: pgmap v1119: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Nov 23 21:18:41 compute-1 ceph-mon[80135]: from='client.16479 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:41 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3777815802' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 23 21:18:41 compute-1 ceph-mon[80135]: from='client.26498 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:41 compute-1 ceph-mon[80135]: from='client.25912 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:41 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/544923333' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 23 21:18:41 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3489410471' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 23 21:18:41 compute-1 ceph-mon[80135]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 23 21:18:41 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2030825737' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:18:41 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/724445792' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 23 21:18:41 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3355703499' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:18:41 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2700110618' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 23 21:18:41 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 23 21:18:41 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2661117439' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 23 21:18:41 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: session ls {prefix=session ls} (starting...)
Nov 23 21:18:41 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 21:18:41 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: status {prefix=status} (starting...)
Nov 23 21:18:41 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Nov 23 21:18:41 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4036111529' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 23 21:18:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:18:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:42.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:18:42 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 23 21:18:42 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2193134433' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 23 21:18:42 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "features"} v 0)
Nov 23 21:18:42 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/190622379' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 23 21:18:42 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:18:42 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 23 21:18:42 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2375060761' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 23 21:18:42 compute-1 ceph-mon[80135]: from='client.16497 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:42 compute-1 ceph-mon[80135]: from='client.25927 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:42 compute-1 ceph-mon[80135]: from='client.26534 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:42 compute-1 ceph-mon[80135]: from='client.16512 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:42 compute-1 ceph-mon[80135]: from='client.25942 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:42 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2661117439' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 23 21:18:42 compute-1 ceph-mon[80135]: from='client.26558 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:42 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2163106279' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 23 21:18:42 compute-1 ceph-mon[80135]: from='client.16524 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:42 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/4036111529' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 23 21:18:42 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2193134433' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 23 21:18:42 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/190622379' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 23 21:18:42 compute-1 ceph-mon[80135]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 23 21:18:42 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1602067548' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 23 21:18:42 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2372991850' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 23 21:18:42 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1648417367' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 23 21:18:42 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2375060761' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 23 21:18:42 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Nov 23 21:18:42 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2923257280' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 23 21:18:42 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 23 21:18:42 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1796376712' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 23 21:18:42 compute-1 nova_compute[230183]: 2025-11-23 21:18:42.982 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:18:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:18:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:43.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:18:43 compute-1 nova_compute[230183]: 2025-11-23 21:18:43.330 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:18:43 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Nov 23 21:18:43 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1143614854' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 23 21:18:43 compute-1 ceph-mon[80135]: from='client.25954 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:43 compute-1 ceph-mon[80135]: from='client.16548 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:43 compute-1 ceph-mon[80135]: pgmap v1120: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 23 21:18:43 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2923257280' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 23 21:18:43 compute-1 ceph-mon[80135]: from='client.25975 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:43 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2990081815' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 23 21:18:43 compute-1 ceph-mon[80135]: from='client.16569 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:43 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3156995856' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 23 21:18:43 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1796376712' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 23 21:18:43 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3207484633' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 23 21:18:43 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2741792213' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 23 21:18:43 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/4136860673' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 23 21:18:43 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1143614854' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 23 21:18:43 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2429490044' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 23 21:18:43 compute-1 ceph-mon[80135]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 23 21:18:43 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Nov 23 21:18:43 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3636856428' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 23 21:18:43 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Nov 23 21:18:43 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3023136414' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 23 21:18:43 compute-1 sshd-session[246429]: Invalid user solana from 161.35.133.66 port 34894
Nov 23 21:18:43 compute-1 sshd-session[246429]: Connection closed by invalid user solana 161.35.133.66 port 34894 [preauth]
Nov 23 21:18:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:18:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:44.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:18:44 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Nov 23 21:18:44 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3945838404' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 23 21:18:44 compute-1 ceph-mon[80135]: from='client.25987 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:44 compute-1 ceph-mon[80135]: from='client.26630 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:44 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1201040668' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 23 21:18:44 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3636856428' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 23 21:18:44 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1287440862' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 23 21:18:44 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/121255193' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 23 21:18:44 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3023136414' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 23 21:18:44 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2743513132' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 23 21:18:44 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3962888677' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 23 21:18:44 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3945838404' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 23 21:18:44 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3690575320' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 23 21:18:44 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2016517135' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 23 21:18:44 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3173105010' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 23 21:18:44 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 23 21:18:44 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/898839678' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 23 21:18:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:18:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:45.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:18:45 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 23 21:18:45 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2696775381' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:46:26.123652+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981010 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 3465216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:46:27.123799+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 3457024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:46:28.123958+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.667137146s of 13.731669426s, submitted: 4
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 3457024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:46:29.124153+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3448832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:46:30.124301+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3448832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:46:31.124505+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3448832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:46:32.124644+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3440640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:46:33.124793+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3440640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:46:34.124929+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82509824 unmapped: 3432448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:46:35.125060+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82509824 unmapped: 3432448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:46:36.125226+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82518016 unmapped: 3424256 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:46:37.125371+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82518016 unmapped: 3424256 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:46:38.125572+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82518016 unmapped: 3424256 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:46:39.125733+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3407872 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:46:40.125923+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3407872 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:46:41.126077+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 3399680 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:46:42.126237+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 3399680 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:46:43.126423+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 3399680 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:46:44.126569+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3391488 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:46:45.126718+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 3383296 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:46:46.126931+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 3383296 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:46:47.127264+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 3383296 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:46:48.127410+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 3375104 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:46:49.127587+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 3375104 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:46:50.127760+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82575360 unmapped: 3366912 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:46:51.127937+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82575360 unmapped: 3366912 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:46:52.128117+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3358720 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:46:53.128259+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3358720 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:46:54.128418+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3358720 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:46:55.128568+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3358720 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:46:56.128703+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 3350528 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:46:57.128836+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 3350528 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:46:58.128965+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 3342336 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:46:59.129127+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 3342336 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:00.129240+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3334144 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:01.129372+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3334144 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:02.129550+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 3325952 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:03.129798+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 3325952 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:04.129990+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3309568 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:05.130130+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3309568 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:06.130609+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3309568 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:07.130784+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 3301376 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:08.130925+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 3301376 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:09.131069+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 3293184 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:10.131259+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 3293184 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:11.131468+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 3293184 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:12.131818+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 3284992 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:13.132050+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 3284992 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:14.132201+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3276800 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:15.132360+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3276800 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:16.132543+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 3268608 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:17.132673+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 3268608 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:18.132783+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 3268608 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:19.133019+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 3260416 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:20.133162+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 3260416 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:21.133284+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 3260416 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:22.133418+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 3252224 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:23.133550+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 3252224 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:24.133970+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 3235840 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:25.134203+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 3235840 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:26.134500+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 3235840 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:27.134653+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3227648 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:28.134814+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3227648 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:29.135030+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 3219456 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:30.135314+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 3219456 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:31.135469+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 3211264 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:32.135629+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 3211264 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:33.135791+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3203072 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:34.135938+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3203072 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:35.136072+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82747392 unmapped: 3194880 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:36.136346+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82747392 unmapped: 3194880 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:37.136506+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82747392 unmapped: 3194880 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:38.136649+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82747392 unmapped: 3194880 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:39.136848+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 3186688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:40.137114+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 3186688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:41.137245+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 3178496 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:42.137496+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 3178496 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:43.137676+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 3170304 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:44.137926+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 3170304 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:45.138380+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 3170304 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:46.138497+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3162112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:47.138624+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3162112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:48.138811+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:49.139067+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 3145728 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:50.139300+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 3145728 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:51.139525+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 3137536 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:52.139691+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 3137536 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:53.139935+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 3137536 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:54.140144+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 3137536 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:55.140383+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3129344 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:56.140690+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3129344 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:57.140967+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82821120 unmapped: 3121152 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:58.141136+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82821120 unmapped: 3121152 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:47:59.141358+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 3112960 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:00.141538+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 3112960 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a935c00 session 0x55805cd1d4a0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a9f9800 session 0x55805a7e63c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:01.141662+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 3112960 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:02.141788+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3104768 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:03.141928+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3104768 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:04.142131+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3104768 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:05.142261+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82845696 unmapped: 3096576 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:06.142463+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82845696 unmapped: 3096576 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:07.529154+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3088384 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:08.529343+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3088384 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:09.529590+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 3080192 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:10.529814+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 3072000 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:11.529929+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 3063808 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a935800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 103.327316284s of 103.344772339s, submitted: 1
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:12.530118+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82894848 unmapped: 3047424 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981010 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:13.530334+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82894848 unmapped: 3047424 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:14.530463+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 3039232 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:15.530597+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 3039232 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:16.530778+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 3031040 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:17.530935+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 3031040 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981010 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:18.531115+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82919424 unmapped: 3022848 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:19.531298+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 3014656 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:20.531452+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 3014656 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:21.531624+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82935808 unmapped: 3006464 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:22.531761+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82935808 unmapped: 3006464 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979828 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:23.531949+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82935808 unmapped: 3006464 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:24.532122+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82944000 unmapped: 2998272 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:25.532261+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82944000 unmapped: 2998272 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.843829155s of 14.855600357s, submitted: 3
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:26.532504+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82944000 unmapped: 2998272 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:27.532692+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82960384 unmapped: 2981888 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979696 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:28.532999+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82960384 unmapped: 2981888 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:29.533220+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82968576 unmapped: 2973696 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:30.533346+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82968576 unmapped: 2973696 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:31.533495+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82976768 unmapped: 2965504 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805b24c000 session 0x55805cf8c5a0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805b24c400 session 0x55805b2a52c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:32.533637+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82976768 unmapped: 2965504 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979696 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:33.533830+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82976768 unmapped: 2965504 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:34.534035+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 2949120 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:35.534268+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 2949120 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:36.534438+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 2940928 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:37.534618+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 2940928 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979696 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:38.534782+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 2932736 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:39.534967+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 2932736 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:40.535205+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 2932736 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:41.535346+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83017728 unmapped: 2924544 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:42.535526+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83017728 unmapped: 2924544 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979696 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.296117783s of 16.299619675s, submitted: 1
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:43.535657+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83025920 unmapped: 2916352 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:44.535776+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 2908160 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:45.535951+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 2908160 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:46.536078+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 2908160 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:47.536209+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 2899968 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979828 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:48.536442+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 2899968 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805b24d000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:49.536759+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 2891776 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:50.536889+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 2891776 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:51.537023+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 2883584 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:52.537147+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 2883584 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981340 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:53.537278+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 2875392 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:54.537424+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 2867200 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:55.537551+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 2867200 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:56.537754+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 2859008 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:57.537967+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 2859008 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981340 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:58.538110+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 2850816 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:48:59.538299+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.764848709s of 16.771341324s, submitted: 2
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 2850816 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:00.538432+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 2850816 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:01.538592+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 2842624 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:02.539064+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 2842624 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981208 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:03.539180+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 2834432 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:04.539372+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 2834432 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:05.539739+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 2834432 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:06.539849+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 2826240 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:07.539972+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 2826240 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981208 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:08.540093+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 2818048 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:09.540258+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 2818048 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:10.540408+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 2809856 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:11.540616+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 2809856 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:12.540735+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 2809856 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981208 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:13.540999+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 2801664 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:14.541254+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 2801664 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:15.541430+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 2801664 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805b24d000 session 0x55805d8bf0e0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a9f9000 session 0x55805d92b680
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:16.541595+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 2793472 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:17.541737+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 2793472 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981208 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:18.541915+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 2785280 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:19.542136+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 2777088 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:20.542346+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 2777088 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:21.542518+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 2768896 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:22.542643+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 2768896 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981208 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:23.542798+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 2760704 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:24.542972+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 2760704 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:25.543159+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 2752512 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:26.543329+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.872358322s of 27.248651505s, submitted: 1
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 2752512 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:27.543512+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 2752512 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981340 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:28.543694+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 2744320 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:29.543893+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 2744320 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:30.544046+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 1687552 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:31.544208+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 1687552 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:32.544330+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 1679360 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982852 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:33.544464+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 1679360 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:34.544610+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 1679360 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:35.544758+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 1671168 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:36.544890+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 1671168 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:37.545001+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 1671168 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982852 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:38.545160+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 1662976 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.098936081s of 12.109712601s, submitted: 2
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:39.545375+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 1654784 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:40.545497+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 1646592 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:41.545630+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 1646592 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:42.545785+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84303872 unmapped: 1638400 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:43.545934+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84303872 unmapped: 1638400 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:44.546073+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 1630208 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:45.546237+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 1630208 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:46.546367+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 1630208 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:47.546504+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 1622016 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:48.546668+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 1622016 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:49.546847+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 1613824 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:50.547010+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 1613824 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:51.547159+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 1613824 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:52.547265+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84336640 unmapped: 1605632 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:53.547382+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84336640 unmapped: 1605632 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:54.547541+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 1589248 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:55.547740+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 1589248 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:56.547921+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 1662976 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:57.548039+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 1662976 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:58.548248+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 1654784 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:49:59.548453+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 1654784 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:00.548689+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 1654784 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:01.548934+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 1646592 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:02.549074+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 1646592 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:03.549224+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84303872 unmapped: 1638400 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:04.549366+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 1630208 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 8328 writes, 34K keys, 8328 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 8328 writes, 1694 syncs, 4.92 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8328 writes, 34K keys, 8328 commit groups, 1.0 writes per commit group, ingest: 21.45 MB, 0.04 MB/s
                                           Interval WAL: 8328 writes, 1694 syncs, 4.92 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5580590769b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5580590769b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5580590769b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:05.549504+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84369408 unmapped: 1572864 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:06.549654+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 1564672 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:07.549782+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 1564672 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:08.550079+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 1556480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:09.550231+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 1556480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:10.550397+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 1556480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:11.550535+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 1548288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:12.550705+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 1548288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:13.550901+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 1540096 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:14.551029+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 1540096 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:15.551181+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 1540096 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:16.551324+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 1531904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:17.551525+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 1531904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:18.551649+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 1523712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:19.551840+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 1523712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:20.551986+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 1515520 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:21.552186+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 1515520 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:22.552306+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 1515520 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:23.552464+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 1507328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:24.552653+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 1499136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:25.552815+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 1490944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:26.552938+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 1490944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:27.553129+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 1490944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:28.553256+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84459520 unmapped: 1482752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:29.553427+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84459520 unmapped: 1482752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:30.553585+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 1474560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:31.553715+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 1474560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:32.553993+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 1466368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:33.554208+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 1466368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:34.554428+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 1466368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:35.554652+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 1458176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:36.554833+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 1458176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:37.554941+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 1449984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:38.555092+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 1449984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:39.555301+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 1449984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:40.555515+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 1441792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:41.555635+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 1441792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:42.555810+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 1433600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:43.555911+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 1433600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:44.556002+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 1425408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:45.556138+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 1425408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:46.556300+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 1417216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:47.556448+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 1417216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:48.556612+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 1400832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:49.556772+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 1400832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:50.556901+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 1400832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:51.557091+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 1392640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:52.557240+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 1392640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:53.557360+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 1392640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:54.557996+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 1384448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:55.558209+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 1384448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:56.558433+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1368064 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:57.558576+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1368064 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:58.558705+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 1359872 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:50:59.558896+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 1359872 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:00.559069+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 1351680 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:01.559231+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 1351680 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:02.559404+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 1351680 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:03.559565+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 1343488 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:04.559725+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 1343488 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:05.560015+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 1335296 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:06.560152+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 1335296 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:07.560303+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 1335296 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:08.560489+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 1318912 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:09.560691+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 1318912 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:10.560847+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 1310720 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:11.560974+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 1310720 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:12.561124+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805ac0a000 session 0x55805cf8d4a0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805c7ff400 session 0x55805cfb2f00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 1302528 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:13.561282+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 1302528 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:14.561409+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 1302528 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:15.561560+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 1294336 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:16.561712+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 1294336 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:17.561843+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84656128 unmapped: 1286144 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:18.562003+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84656128 unmapped: 1286144 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:19.562174+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84656128 unmapped: 1286144 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:20.562376+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 1277952 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:21.562559+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 1277952 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:22.562803+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 1269760 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:23.562984+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805b24c000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 104.574203491s of 104.905845642s, submitted: 2
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 1269760 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:24.563133+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 1269760 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:25.563303+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 1261568 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:26.563462+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 1261568 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:27.563658+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982261 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 1253376 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:28.563882+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 1253376 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:29.564117+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805b24c400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 1253376 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:30.564244+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 1245184 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:31.564425+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 1245184 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:32.564556+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983773 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 1236992 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:33.564678+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 1236992 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:34.564799+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 1228800 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:35.564997+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.095993042s of 12.122361183s, submitted: 2
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 1228800 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:36.565180+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 1220608 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:37.565341+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983182 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 1220608 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:38.565560+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 1212416 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:39.565847+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 1212416 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:40.566005+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84738048 unmapped: 1204224 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:41.566173+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 1196032 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:42.566363+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983050 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 1196032 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:43.566536+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 1187840 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:44.566661+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 1187840 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:45.566839+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 1179648 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:46.566973+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 1179648 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:47.567197+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983050 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 1179648 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:48.567374+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 1171456 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:49.567641+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 1171456 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:50.567832+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 1163264 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:51.567964+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 1163264 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:52.568127+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983050 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 1163264 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:53.568357+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 1155072 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:54.568509+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 1155072 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:55.568669+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 1146880 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:56.568812+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 1146880 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:57.568979+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983050 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 1138688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:58.569098+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 1138688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:51:59.569254+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 1138688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:00.569409+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 1130496 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:01.569600+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 1130496 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:02.569733+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983050 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84819968 unmapped: 1122304 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:03.569951+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84819968 unmapped: 1122304 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:04.570084+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 1114112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:05.570298+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a935800 session 0x55805d8be000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 1114112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:06.570463+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 1114112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:07.570642+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983050 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 1114112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:08.570791+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 1114112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:09.570961+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 1114112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:10.571090+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 1114112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:11.571284+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 1114112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:12.571444+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983050 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 1105920 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:13.571608+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 1105920 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:14.571812+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 38.643814087s of 38.652179718s, submitted: 2
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84885504 unmapped: 1056768 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:15.571916+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 1949696 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:16.572037+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:17.572249+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983050 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:18.572432+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:19.572581+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:20.572740+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:21.573115+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:22.573271+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983050 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:23.573465+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:24.573590+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805b24d000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.975281239s of 10.038576126s, submitted: 382
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:25.575065+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:26.575426+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:27.575635+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a9f9000 session 0x55805d5663c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a9f9800 session 0x55805d8bf680
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983182 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:28.575837+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:29.576064+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:30.576222+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a935800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:31.576391+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:32.576586+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805b24c400 session 0x55805a6730e0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805b24c000 session 0x55805a67af00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984562 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:33.576854+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:34.577050+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:35.577350+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:36.577735+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:37.577947+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984562 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:38.578126+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.571740150s of 14.652972221s, submitted: 3
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:39.578291+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:40.578456+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:41.578610+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:42.578896+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:43.579035+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984694 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805ac0a000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:44.579186+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:45.579415+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:46.579646+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:47.579772+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:48.579971+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984826 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:49.580132+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805c7ff400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.419490814s of 10.425830841s, submitted: 2
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:50.580273+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:51.580465+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:52.580644+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:53.580760+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986206 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a9f9000 session 0x55805cd7d680
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:54.580914+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:55.581030+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:56.581182+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:57.581359+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:58.581492+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985615 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:52:59.581664+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:00.581820+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:01.582005+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:02.582137+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:03.582239+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985483 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.834005356s of 14.852039337s, submitted: 4
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:04.582335+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:05.582470+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:06.582595+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:07.582723+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:08.582915+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985615 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:09.583127+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:10.583287+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a7f0000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:11.583473+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:12.583649+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:13.583899+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987127 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:14.584010+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:15.584155+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:16.584307+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:17.584465+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:18.584622+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986536 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.555835724s of 14.712920189s, submitted: 4
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:19.584824+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:20.584917+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a935800 session 0x55805d3f1c20
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805b24d000 session 0x55805cfb2000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:21.584981+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:22.585111+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:23.585264+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986404 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:24.585388+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:25.585552+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:26.585732+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:27.585884+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:28.586040+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986404 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:29.586211+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:30.586352+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:31.586493+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a935800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.893251419s of 12.898416519s, submitted: 1
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:32.586731+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:33.586968+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986536 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:34.587098+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:35.587306+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:36.587552+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805c7ff400 session 0x55805b7d63c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805ac0a000 session 0x55805c4554a0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:37.587726+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:38.588009+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986536 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:39.588251+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:40.588411+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:41.588552+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:42.588749+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:43.588903+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985945 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:44.589059+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:45.589228+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:46.589361+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:47.589545+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.695014000s of 15.724806786s, submitted: 2
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:48.589705+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986077 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:49.589850+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:50.589975+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:51.590137+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:52.590351+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:53.590480+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985945 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:54.590632+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:55.590932+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:56.591115+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:57.591322+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:58.591500+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985945 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:53:59.591696+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:00.591802+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a935800 session 0x55805cf8c000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:01.592051+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:02.592305+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:03.592458+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.657769203s of 15.741616249s, submitted: 2
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985813 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:04.592606+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:05.592819+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:06.593077+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:07.593195+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:08.593361+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985813 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:09.593537+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:10.593647+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:11.593781+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:12.593923+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805b24c000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:13.594044+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985945 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:14.594266+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:15.594413+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:16.594537+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:17.594667+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:18.594825+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985945 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:19.595003+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805b24c400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.062019348s of 16.124525070s, submitted: 2
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:20.595139+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:21.595259+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:22.595390+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:23.595529+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987457 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:24.595655+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:25.595777+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:26.595940+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:27.596060+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:28.596214+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987325 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:29.596400+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:30.596539+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:31.596714+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:32.597003+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:33.597168+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987325 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:34.597513+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:35.598175+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:36.598335+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:37.598564+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:38.599567+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987325 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:39.600048+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:40.601152+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:41.601765+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805b24c400 session 0x55805cc7cb40
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805b24c000 session 0x55805cd75680
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:42.601982+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:43.602099+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987325 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:44.602460+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:45.602695+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:46.602882+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:47.602999+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:48.603142+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987325 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:49.603278+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:50.603398+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:51.603586+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:52.603717+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a935800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 33.114776611s of 33.127079010s, submitted: 2
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:53.603917+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987457 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:54.604103+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:55.604208+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:56.604318+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:57.604434+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:58.604571+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805ac0a000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988969 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:54:59.604754+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:00.604898+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:01.605021+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:02.605214+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.612201691s of 10.619262695s, submitted: 2
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:03.605348+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988837 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:04.605548+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:05.605756+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:06.605987+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:07.606131+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:08.606293+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988246 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:09.606470+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:10.606664+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:11.606907+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:12.607112+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:13.607296+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988246 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:14.607416+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:15.607528+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:16.607675+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:17.607781+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:18.607957+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988246 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:19.608098+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:20.608232+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:21.608374+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:22.608490+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:23.608631+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988246 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:24.608817+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:25.608989+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:26.609154+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:27.609306+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:28.609445+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988246 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:29.609633+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:30.609774+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:31.609908+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:32.610216+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:33.610331+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988246 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:34.610464+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:35.610635+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:36.610843+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:37.610940+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:38.611115+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988246 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:39.611339+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:40.611577+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:41.611711+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:42.611896+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:43.612127+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988246 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:44.612264+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:45.612406+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:46.612558+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:47.612724+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:48.612951+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988246 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:49.613129+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:50.613570+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a9f9000 session 0x55805c452000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:51.613711+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:52.613843+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:53.613944+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988246 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:54.614072+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:55.614194+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:56.614314+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:57.614687+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:58.614860+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988246 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:59.615088+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:00.615287+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:01.615471+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805b24c400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 58.244842529s of 58.250808716s, submitted: 2
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:02.615656+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:03.615792+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988378 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:04.615910+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:05.616065+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:06.616222+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:07.616378+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:08.616552+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989890 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:09.616698+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:10.616938+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:11.617208+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:12.617434+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:13.617609+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989299 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:14.617759+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:15.617928+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:16.618122+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:17.618256+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.113847733s of 16.138629913s, submitted: 3
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:18.618389+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989167 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:19.618582+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:20.618950+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805b24c400 session 0x55805c633860
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:21.619131+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:22.619296+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:23.619481+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989167 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:24.619608+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:25.619736+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:26.619904+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:27.620012+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:28.620157+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989167 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:29.620350+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:30.620496+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:31.620699+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805c7ff400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.863085747s of 13.866735458s, submitted: 1
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:32.620830+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:33.621047+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989299 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:34.621197+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:35.621337+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:36.621527+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:37.621650+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:38.621770+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990811 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 1777664 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:39.621924+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 1777664 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:40.622157+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 1777664 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:41.622327+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 1777664 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:42.622451+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 1777664 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:43.622578+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990220 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 1777664 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:44.622999+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 1777664 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:45.623665+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 1777664 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:46.623784+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.973569870s of 14.986274719s, submitted: 3
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805c7ff400 session 0x55805cc7e5a0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:47.623949+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:48.624061+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805ac0a000 session 0x55805a7e6b40
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a935800 session 0x55805d8adc20
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990088 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:49.624183+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:50.624357+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:51.624724+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:52.625058+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:53.625176+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990088 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:54.625340+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:55.625547+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:56.625787+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:57.625954+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a935800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.993670464s of 10.996788025s, submitted: 1
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:58.626105+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 1761280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990220 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:59.626289+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 1761280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:00.626418+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 1761280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:01.626546+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 1761280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:02.626692+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 1761280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:03.626889+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 1761280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805ac0a000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991864 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:04.627021+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 1761280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:05.627133+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 1761280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805b24c400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:06.627344+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 1761280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:07.627490+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 1753088 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:08.627737+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 1753088 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.335764885s of 11.355053902s, submitted: 4
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992785 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:09.627966+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 1753088 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:10.628161+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 1753088 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:11.628304+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 1753088 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:12.628425+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:13.628542+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992653 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:14.628670+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:15.628803+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:16.628945+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:17.629179+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:18.629318+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:19.629480+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992521 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:20.629599+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:21.629819+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:22.629956+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:23.630123+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:24.630281+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992521 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:25.630579+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:26.630762+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:27.630981+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:28.631125+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:29.631344+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992521 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:30.631474+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:31.631643+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:32.631860+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:33.632030+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:34.632199+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992521 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:35.632410+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:36.632622+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:37.632857+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805d6b7400 session 0x55805c452b40
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a7f0000 session 0x55805d25f0e0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:38.633085+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:39.633255+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992521 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:40.633426+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:41.633621+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:42.633753+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:43.633929+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:44.634055+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992521 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:45.634197+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:46.634324+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:47.634507+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:48.634632+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:49.635197+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992521 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:50.635367+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:51.636027+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:52.636233+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d651c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 43.579681396s of 43.622188568s, submitted: 3
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:53.636917+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:54.637043+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992653 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:55.637317+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:56.637535+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:57.637656+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:58.637796+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d650800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:59.637944+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995677 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:00.638069+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:01.638189+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:02.638312+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:03.638458+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.178874969s of 11.839152336s, submitted: 5
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:04.638634+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994363 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:05.638849+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:06.639029+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:07.639199+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:08.639317+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:09.639467+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994363 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:10.639684+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:11.639891+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:12.640080+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:13.640283+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:14.640481+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994363 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:15.640613+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:16.640778+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:17.641022+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:18.641156+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:19.641361+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994363 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:20.641543+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:21.641701+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:22.641954+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805ac0a000 session 0x55805cd7cb40
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a935800 session 0x55805d565c20
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:23.642120+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:24.642811+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994363 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:25.643091+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:26.643218+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:27.643362+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:28.643513+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:29.643887+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994363 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:30.644265+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:31.644413+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:32.644575+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:33.644720+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 29.315622330s of 29.319524765s, submitted: 1
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:34.644899+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994495 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805d64ac00 session 0x55805cd1c960
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d64ac00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:35.645253+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:36.645575+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:37.645701+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:38.645957+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:39.646236+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a7f0000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999031 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:40.646468+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:41.646631+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:42.646844+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:43.646970+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:44.647127+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997849 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:45.647277+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:46.647401+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:47.647523+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.826273918s of 13.858831406s, submitted: 6
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:48.647727+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:49.647956+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997717 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:50.648080+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:51.648235+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:52.648429+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:53.648655+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:54.648831+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997717 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:55.649257+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:56.649413+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805b24c400 session 0x55805d564f00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a9f9000 session 0x55805b7d72c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:57.649814+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:58.649976+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:59.650169+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997717 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:00.650706+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:01.651061+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:02.651187+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:03.651309+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:04.651713+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997717 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:05.652081+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:06.652395+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:07.652519+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a935800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.386011124s of 20.388910294s, submitted: 1
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:08.652688+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:09.652837+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997849 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:10.652980+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:11.653128+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:12.653299+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:13.653538+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805ac0a000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:14.653683+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997849 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:15.653974+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:16.654189+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805d650800 session 0x55805a67a960
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805d651c00 session 0x55805a7e5860
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:17.654346+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:18.654583+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:19.654960+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997258 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.066018105s of 12.072667122s, submitted: 2
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:20.655091+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:21.655224+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:22.655376+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:23.655572+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:24.655697+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:25.655822+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:26.655920+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:27.656139+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 1703936 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:28.656266+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 1703936 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:29.656457+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996667 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 1703936 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:30.656591+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 1703936 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:31.656714+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 1703936 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:32.656907+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 1703936 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:33.657061+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 1703936 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:34.657197+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996667 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:35.657355+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:36.657480+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:37.657603+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:38.657725+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:39.657952+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996667 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:40.658063+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:41.658157+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:42.658297+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.506420135s of 22.516693115s, submitted: 3
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:43.658426+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:44.658619+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:45.658821+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:46.659077+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:47.659262+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:48.659472+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:49.659680+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:50.659854+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:51.660045+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:52.660299+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:53.660482+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:54.660667+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:55.660884+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:56.661097+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:57.661311+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:58.661503+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:59.661724+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:00.661950+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:01.662280+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:02.662541+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 1679360 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:03.662829+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 1679360 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:04.663096+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 9173 writes, 35K keys, 9173 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 9173 writes, 2093 syncs, 4.38 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 845 writes, 1350 keys, 845 commit groups, 1.0 writes per commit group, ingest: 0.45 MB, 0.00 MB/s
                                           Interval WAL: 845 writes, 399 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5580590769b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5580590769b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5580590769b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:05.663243+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:06.663410+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:07.663556+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:08.663703+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:09.663932+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:10.664098+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:11.664251+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:12.664405+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:13.664551+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:14.664690+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:15.664829+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:16.664921+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:17.665096+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:18.665290+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a7f0000 session 0x55805b434b40
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a9f9c00 session 0x55805d3f05a0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:19.665500+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:20.665688+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread fragmentation_score=0.000031 took=0.000034s
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:21.665893+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:22.666116+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:23.666288+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:24.666455+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:25.666682+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:26.666850+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:27.667059+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:28.667190+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:29.667411+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a7f0000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 47.324840546s of 47.329608917s, submitted: 1
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996667 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:30.667547+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:31.667692+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:32.667809+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:33.667984+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:34.668145+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996667 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:35.668317+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 1613824 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:36.668544+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 1613824 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:37.668749+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 1613824 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:38.669002+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 1613824 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:39.669337+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998179 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 1613824 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:40.669559+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 1613824 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:41.669753+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 1613824 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:42.670000+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:43.670178+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.818011284s of 13.826013565s, submitted: 2
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:44.670342+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998047 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:45.670481+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:46.670639+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:47.670809+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:48.670983+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:49.671157+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998047 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:50.671272+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:51.671395+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:52.671514+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:53.671684+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:54.671823+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998047 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:55.672046+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:56.672232+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:57.672398+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:58.672566+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:59.672810+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998047 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:00.672928+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:01.673054+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:02.673202+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:03.673353+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805ac0a000 session 0x55805cc80b40
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a935800 session 0x55805cc7c1e0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:04.673483+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998047 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:05.673636+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:06.673755+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:07.673953+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:08.674168+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:09.674424+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998047 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:10.674549+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:11.674697+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:12.674850+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:13.675097+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:14.675257+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998047 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:15.675427+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:16.675578+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805b24c400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 33.236343384s of 33.268554688s, submitted: 1
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:17.675709+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:18.675849+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:19.676031+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998179 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:20.676178+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d650800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:21.676318+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:22.676424+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:23.676585+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d651c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 1589248 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:24.676730+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001203 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 1589248 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:25.676932+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:26.677052+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:27.677302+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.904835701s of 10.988478661s, submitted: 3
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:28.677447+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:29.677646+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000480 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:30.677751+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:31.677928+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:32.678163+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:33.678323+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:34.678539+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a9f9000 session 0x55805d862f00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a7f0000 session 0x55805cc7cb40
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000480 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:35.678705+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:36.678948+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:37.679149+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:38.679289+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:39.679503+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000480 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:40.679672+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:41.679834+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:42.680002+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:43.680150+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:44.680313+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000480 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:45.680486+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a7f0000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.852489471s of 18.009616852s, submitted: 2
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:46.680623+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 1572864 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:47.680785+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 1572864 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:48.680936+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 1572864 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:49.681109+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000612 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:50.681277+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:51.681400+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:52.681567+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:53.681693+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:54.681937+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999430 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:55.682144+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:56.682388+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:57.682583+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:58.682733+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:59.682975+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.888288498s of 13.898387909s, submitted: 3
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999298 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:00.683139+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:01.683579+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:02.683704+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:03.683849+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:04.684219+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:05.684370+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999298 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:06.684571+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:07.685062+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:08.685894+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:09.686099+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:10.686253+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999298 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:11.686433+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805d651c00 session 0x55805d564f00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805b24c400 session 0x55805b7d74a0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:12.686652+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:13.686838+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:14.687091+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.887771606s of 14.891558647s, submitted: 1
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [0,0,2])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:15.687323+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999370 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 1425408 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:16.687475+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 65536 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:17.687648+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:18.688132+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:19.688468+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:20.689204+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999298 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:21.689899+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:22.690070+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a935800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:23.690344+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:24.690587+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:25.690796+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999430 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:26.690954+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:27.691136+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:28.691357+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.794444084s of 14.003565788s, submitted: 364
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:29.691690+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:30.691927+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000942 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:31.692104+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:32.692326+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:33.692459+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:34.692617+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a7f0000 session 0x55805c455c20
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:35.692820+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000942 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:36.692967+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:37.693110+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:38.693254+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:39.693594+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:40.694018+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000810 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:41.694844+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:42.695227+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:43.696085+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:44.696217+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:45.696452+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.896261215s of 16.902111053s, submitted: 2
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000942 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:46.696811+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:47.697445+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:48.697772+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:49.698015+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:50.698250+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002454 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:51.698484+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:52.698834+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:53.699006+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:54.699520+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:55.699791+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001863 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:56.699944+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:57.700153+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:58.700368+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:59.700639+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:00.700816+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001863 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.281729698s of 15.294480324s, submitted: 3
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:01.701077+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:02.701254+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:03.701451+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:04.701617+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:05.701743+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001731 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:06.701927+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:07.702104+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:08.702296+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:09.702564+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:10.702754+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001731 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:11.702899+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:12.703048+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:13.703201+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:14.703328+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:15.703453+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001731 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:16.703590+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:17.703816+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:18.704006+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:19.704218+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:20.704339+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001731 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805d650800 session 0x55805c455a40
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805d6b7400 session 0x55805d4ae1e0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:21.704480+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:22.704611+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:23.704712+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:24.704895+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:25.705010+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001731 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:26.705142+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:27.705267+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:28.705393+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a9f9000 session 0x55805c7f0000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a935800 session 0x55805c7f10e0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:29.705530+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:30.705671+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001731 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:31.705822+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a7f0000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.912832260s of 30.916051865s, submitted: 1
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:32.705926+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:33.706119+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:34.706202+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:35.706366+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001863 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:36.706521+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:37.706667+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805b24c400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:38.706759+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:39.706946+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d650800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:40.707091+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001995 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:41.707250+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:42.707412+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:43.707590+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:44.707755+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:45.707970+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001995 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:46.708153+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d651c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.676693916s of 14.759789467s, submitted: 2
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:47.708316+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:48.708512+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:49.708714+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:50.708890+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a9f9c00 session 0x55805cc7eb40
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004296 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:51.709053+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:52.709185+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:53.709931+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:54.710126+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:55.710266+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004296 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:56.710413+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:57.710561+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.021399498s of 11.037414551s, submitted: 4
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:58.710713+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:59.710925+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:00.711060+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004164 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:01.711210+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:02.711364+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:03.711495+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:04.711727+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:05.711917+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007320 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:06.712081+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:07.712211+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805ac0a000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.964635849s of 10.006252289s, submitted: 4
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:08.712323+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:09.712506+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:10.712624+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007650 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:11.712760+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:12.712914+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:13.713065+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:14.713201+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:15.713325+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007518 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:16.713433+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:17.713557+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:18.713736+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:19.713968+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:20.714086+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007518 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:21.714249+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:22.714384+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:23.714533+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:24.714675+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:25.714844+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007518 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:26.715011+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:27.715144+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:28.715285+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:29.715481+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:30.715621+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007518 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:31.715736+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:32.715898+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:33.716032+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:34.716154+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:35.716364+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007518 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:36.716543+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:37.716682+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:38.716850+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:39.717158+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:40.717312+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007518 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:41.717457+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:42.717697+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805b24c400 session 0x55805cc80780
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a7f0000 session 0x55805b7f4d20
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:43.717991+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:44.718137+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:45.718333+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007518 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:46.718518+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:47.718705+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:48.718852+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:49.719096+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:50.719237+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007518 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:51.719434+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:52.719618+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:53.719807+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a7f0000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 46.184024811s of 46.203655243s, submitted: 4
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:54.720111+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:55.720261+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007650 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:56.720424+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:57.720648+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:58.720842+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:59.721094+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a935800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:00.721260+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007650 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:01.721402+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:02.721519+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805d651c00 session 0x55805d5652c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805d650800 session 0x55805c7ef0e0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:03.721723+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:04.721915+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:05.722134+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.059599876s of 12.068504333s, submitted: 2
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006468 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:06.722429+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:07.722566+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:08.722746+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:09.722974+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:10.723149+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006468 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:11.723296+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:12.723442+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:13.723559+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:14.723751+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:15.723932+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 142 handle_osd_map epochs [143,144], i have 142, src has [1,144]
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.084339142s of 10.095458031s, submitted: 3
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1016839 data_alloc: 218103808 data_used: 266240
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88047616 unmapped: 2088960 heap: 90136576 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:16.724064+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _renew_subs
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 145 ms_handle_reset con 0x55805a9f9c00 session 0x55805d4aeb40
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 2031616 heap: 90136576 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:17.724216+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805b24c400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fc5dc000/0x0/0x4ffc00000, data 0x170fbd/0x22e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88227840 unmapped: 18694144 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:18.724362+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _renew_subs
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 146 ms_handle_reset con 0x55805b24c400 session 0x55805b69cf00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88260608 unmapped: 18661376 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:19.724546+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88260608 unmapped: 18661376 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:20.724670+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134726 data_alloc: 218103808 data_used: 274432
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88260608 unmapped: 18661376 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:21.724854+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fb5d8000/0x0/0x4ffc00000, data 0x11730f8/0x1233000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88260608 unmapped: 18661376 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:22.725149+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88260608 unmapped: 18661376 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:23.725304+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88260608 unmapped: 18661376 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fb5d8000/0x0/0x4ffc00000, data 0x11730f8/0x1233000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:24.725416+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88260608 unmapped: 18661376 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:25.725534+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d5000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137156 data_alloc: 218103808 data_used: 274432
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88260608 unmapped: 18661376 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:26.725742+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88276992 unmapped: 18644992 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:27.725909+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88276992 unmapped: 18644992 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:28.726034+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d5000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88276992 unmapped: 18644992 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:29.726219+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d5000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.137514114s of 14.367403030s, submitted: 61
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 18636800 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:30.726373+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137024 data_alloc: 218103808 data_used: 274432
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 18636800 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:31.726509+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d5000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 18636800 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:32.726641+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 18636800 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:33.726828+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d5000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 18636800 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:34.727061+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 18636800 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 147 ms_handle_reset con 0x55805a9f9000 session 0x55805d4afe00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:35.727263+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137024 data_alloc: 218103808 data_used: 274432
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 18636800 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:36.727500+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 18636800 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:37.727693+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 18636800 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:38.727787+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 18628608 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d5000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:39.727975+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 18628608 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:40.728107+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137024 data_alloc: 218103808 data_used: 274432
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 18628608 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:41.728339+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 18628608 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:42.728547+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 18628608 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:43.728660+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d5000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 18628608 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:44.728801+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 18628608 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:45.728937+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.764553070s of 15.769596100s, submitted: 1
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136316 data_alloc: 218103808 data_used: 274432
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 18628608 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:46.729101+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 18628608 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:47.729256+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d6000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88301568 unmapped: 18620416 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:48.729394+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:49.729569+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:50.729708+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d6000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139340 data_alloc: 218103808 data_used: 274432
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:51.729936+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:52.730310+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:53.730779+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:54.731144+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:55.731469+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138749 data_alloc: 218103808 data_used: 274432
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:56.731714+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d6000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:57.731883+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:58.732054+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:59.732275+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:00.732404+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d6000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138749 data_alloc: 218103808 data_used: 274432
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:01.733563+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:02.734609+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805b24c400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 147 ms_handle_reset con 0x55805b24c400 session 0x55805c4534a0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d650800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 147 ms_handle_reset con 0x55805d650800 session 0x55805d3f14a0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d6000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d651c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 147 ms_handle_reset con 0x55805d651c00 session 0x55805d4721e0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:03.735545+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.763429642s of 17.799776077s, submitted: 4
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d740000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 147 ms_handle_reset con 0x55805d740000 session 0x55805b7d6b40
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d64a800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 147 ms_handle_reset con 0x55805d64a800 session 0x55805d3f05a0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:04.736214+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88326144 unmapped: 18595840 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805b24c400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 147 ms_handle_reset con 0x55805b24c400 session 0x55805d8be960
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d650800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:05.736571+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88326144 unmapped: 18595840 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138769 data_alloc: 218103808 data_used: 278528
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:06.737273+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88326144 unmapped: 18595840 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _renew_subs
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 147 handle_osd_map epochs [148,148], i have 148, src has [1,148]
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:07.737760+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88334336 unmapped: 18587648 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb5d1000/0x0/0x4ffc00000, data 0x11771be/0x123a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:08.738110+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88334336 unmapped: 18587648 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:09.738289+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _renew_subs
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 18579456 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 149 ms_handle_reset con 0x55805d650800 session 0x55805d92ad20
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d651c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 149 ms_handle_reset con 0x55805d651c00 session 0x55805d8bfa40
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d740000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 149 ms_handle_reset con 0x55805d740000 session 0x55805a6734a0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e2f1c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 149 ms_handle_reset con 0x55805e2f1c00 session 0x55805cfb3a40
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805b24c400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 149 ms_handle_reset con 0x55805b24c400 session 0x55805a7e7680
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:10.738437+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 21446656 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa6d5000/0x0/0x4ffc00000, data 0x207136b/0x2136000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1268233 data_alloc: 218103808 data_used: 278528
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:11.738581+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa6d5000/0x0/0x4ffc00000, data 0x207136b/0x2136000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 21446656 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:12.738723+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 21446656 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:13.738897+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 21446656 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d650800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 149 ms_handle_reset con 0x55805d650800 session 0x55805cc7cd20
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d651c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.449963570s of 10.389714241s, submitted: 77
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d740000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:14.739067+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 21430272 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:15.739185+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 90390528 unmapped: 20733952 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa6d5000/0x0/0x4ffc00000, data 0x207136b/0x2136000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1365949 data_alloc: 234881024 data_used: 14716928
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:16.739346+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:17.739468+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:18.739596+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:19.739840+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 149 handle_osd_map epochs [149,150], i have 149, src has [1,150]
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa6d5000/0x0/0x4ffc00000, data 0x207136b/0x2136000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:20.740013+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:21.740300+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367067 data_alloc: 234881024 data_used: 14716928
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:22.740550+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:23.740740+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:24.741511+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa6d2000/0x0/0x4ffc00000, data 0x207333d/0x2139000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:25.742016+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:26.742201+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367675 data_alloc: 234881024 data_used: 14733312
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.585947990s of 12.599705696s, submitted: 21
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:27.742316+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114442240 unmapped: 876544 heap: 115318784 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:28.742460+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8928000/0x0/0x4ffc00000, data 0x2c7e33d/0x2d44000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [0,0,0,0,0,8])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 1097728 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f88fe000/0x0/0x4ffc00000, data 0x2ca733d/0x2d6d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:29.742625+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 3555328 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:30.742755+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 3555328 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:31.743767+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1475763 data_alloc: 234881024 data_used: 16691200
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 3555328 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:32.744401+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 3555328 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:33.744680+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 3555328 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f88f3000/0x0/0x4ffc00000, data 0x2cb333d/0x2d79000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:34.744854+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 3522560 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f88f0000/0x0/0x4ffc00000, data 0x2cb633d/0x2d7c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:35.745017+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 3522560 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:36.745233+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1475931 data_alloc: 234881024 data_used: 16703488
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 3522560 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:37.745429+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113909760 unmapped: 3506176 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f88f0000/0x0/0x4ffc00000, data 0x2cb633d/0x2d7c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:38.745924+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113917952 unmapped: 3497984 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:39.746221+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113917952 unmapped: 3497984 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:40.746539+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113917952 unmapped: 3497984 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.262884140s of 14.072373390s, submitted: 141
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:41.746804+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476763 data_alloc: 234881024 data_used: 16764928
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113934336 unmapped: 3481600 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f88ef000/0x0/0x4ffc00000, data 0x2cb733d/0x2d7d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:42.746980+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113934336 unmapped: 3481600 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:43.747210+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113942528 unmapped: 3473408 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:44.747341+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f88ef000/0x0/0x4ffc00000, data 0x2cb733d/0x2d7d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113942528 unmapped: 3473408 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:45.747540+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113942528 unmapped: 3473408 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:46.747676+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476763 data_alloc: 234881024 data_used: 16764928
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113942528 unmapped: 3473408 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f88ef000/0x0/0x4ffc00000, data 0x2cb733d/0x2d7d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:47.747852+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 3440640 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:48.748150+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 3440640 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:49.748307+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 3440640 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:50.748464+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805daf9c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805daf9c00 session 0x55805cc7c1e0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805daf9800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 3440640 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805daf9800 session 0x55805cc80000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:51.748638+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476259 data_alloc: 234881024 data_used: 16764928
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f88ef000/0x0/0x4ffc00000, data 0x2cb733d/0x2d7d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 3440640 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805daf9400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.868956566s of 10.927964211s, submitted: 6
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805daf9400 session 0x55805c7f03c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805b24c400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805b24c400 session 0x55805cd7c1e0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d650800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d650800 session 0x55805d92a5a0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805daf9800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805daf9800 session 0x55805d92bc20
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805daf9c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805daf9c00 session 0x55805a7e7e00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:52.748818+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 14835712 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7c62000/0x0/0x4ffc00000, data 0x394339f/0x3a0a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:53.748927+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8c0c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0c00 session 0x55805d4af4a0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 14835712 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:54.749077+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 14835712 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805b24c400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:55.749232+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805b24c400 session 0x55805cd1d4a0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 14835712 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:56.749364+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1570244 data_alloc: 234881024 data_used: 16769024
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 14835712 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d650800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d650800 session 0x55805cc80f00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8c0c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0c00 session 0x55805a7e5860
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:57.749536+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805daf9800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805daf9c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 15720448 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:58.749621+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7c3c000/0x0/0x4ffc00000, data 0x39673d2/0x3a30000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114491392 unmapped: 14475264 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:59.749841+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121708544 unmapped: 7258112 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:00.749991+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121708544 unmapped: 7258112 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:01.750154+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1665355 data_alloc: 234881024 data_used: 26218496
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121708544 unmapped: 7258112 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:02.750300+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121708544 unmapped: 7258112 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:03.750431+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121708544 unmapped: 7258112 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:04.750597+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7c3c000/0x0/0x4ffc00000, data 0x39673d2/0x3a30000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121708544 unmapped: 7258112 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:05.750776+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121708544 unmapped: 7258112 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:06.750915+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1665355 data_alloc: 234881024 data_used: 26218496
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121708544 unmapped: 7258112 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:07.751047+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121708544 unmapped: 7258112 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:08.751184+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121716736 unmapped: 7249920 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.360017776s of 17.521881104s, submitted: 38
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:09.751354+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 127156224 unmapped: 6299648 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7c3c000/0x0/0x4ffc00000, data 0x39673d2/0x3a30000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f6c19000/0x0/0x4ffc00000, data 0x498a3d2/0x4a53000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:10.751497+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 7643136 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:11.751608+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1796399 data_alloc: 234881024 data_used: 26443776
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 8118272 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f6bf8000/0x0/0x4ffc00000, data 0x49ab3d2/0x4a74000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:12.751754+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 8118272 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:13.751960+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 8118272 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:14.752083+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 8118272 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:15.752236+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 8118272 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:16.752354+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1796687 data_alloc: 234881024 data_used: 26435584
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f6bf7000/0x0/0x4ffc00000, data 0x49ab3d2/0x4a74000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 8118272 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:17.752482+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 8118272 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:18.752672+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 8118272 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805daf9800 session 0x55805a6730e0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805daf9c00 session 0x55805a7e6b40
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805daf9c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.716887474s of 10.034674644s, submitted: 127
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:19.752933+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805daf9c00 session 0x55805d8be3c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 17383424 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:20.753066+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 17383424 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f851c000/0x0/0x4ffc00000, data 0x2cb733d/0x2d7d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:21.753216+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1492452 data_alloc: 234881024 data_used: 12238848
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 17383424 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:22.753342+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 17383424 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:23.753523+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f851b000/0x0/0x4ffc00000, data 0x2cb833d/0x2d7e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 17383424 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:24.753682+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 17383424 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d651c00 session 0x55805d25e000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d740000 session 0x55805cf8d4a0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:25.753815+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805b24c400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805b24c400 session 0x55805d8be960
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106733568 unmapped: 26722304 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f88ee000/0x0/0x4ffc00000, data 0x2cb833d/0x2d7e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:26.753989+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180934 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106733568 unmapped: 26722304 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805ac0a000 session 0x55805c454b40
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805cfb2f00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:27.754123+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106733568 unmapped: 26722304 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:28.754255+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106733568 unmapped: 26722304 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa427000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:29.754408+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:30.754543+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805b69d2c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:31.754680+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180934 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:32.754828+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:33.754969+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa427000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:34.755144+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:35.755261+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:36.755377+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180934 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa427000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:37.755532+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805ac0a000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.443714142s of 18.708480835s, submitted: 87
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa427000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:38.755692+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:39.755856+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:40.756091+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa427000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:41.756263+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181066 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:42.756406+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa427000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:43.756537+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805b24c400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:44.756722+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:45.756848+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e2f0800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 33972224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805cd7c3c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e2f0c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0c00 session 0x55805cd7cf00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8e1c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8e1c00 session 0x55805b7f5c20
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805b7f5a40
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805b7f4780
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa427000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:46.757007+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207078 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 33972224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:47.757152+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 33972224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:48.757324+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 33972224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa062000/0x0/0x4ffc00000, data 0x15452db/0x160a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:49.757518+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e2f0800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805b7f43c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 33972224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e2f0c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0c00 session 0x55805b2a5a40
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:50.757677+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8e1400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 33972224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8e1400 session 0x55805b2a4b40
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.920416832s of 12.955449104s, submitted: 4
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805b2a4960
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:51.757819+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211460 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e2f0800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 34390016 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c2d000/0x0/0x4ffc00000, data 0x15692eb/0x162f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:52.757954+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 34390016 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:53.758065+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 34390016 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:54.758186+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 34390016 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c2d000/0x0/0x4ffc00000, data 0x15692eb/0x162f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:55.758286+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c2d000/0x0/0x4ffc00000, data 0x15692eb/0x162f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 34390016 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:56.758437+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c2d000/0x0/0x4ffc00000, data 0x15692eb/0x162f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238384 data_alloc: 218103808 data_used: 4263936
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 34373632 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:57.758549+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 34373632 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:58.758641+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 34373632 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:59.758770+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 34373632 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:00.758975+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 34373632 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:01.759134+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238384 data_alloc: 218103808 data_used: 4263936
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 34373632 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:02.759295+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c2d000/0x0/0x4ffc00000, data 0x15692eb/0x162f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 34373632 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:03.759453+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.594253540s of 12.610000610s, submitted: 4
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 109379584 unmapped: 31424512 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9a24000/0x0/0x4ffc00000, data 0x17722eb/0x1838000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:04.759634+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9a07000/0x0/0x4ffc00000, data 0x178f2eb/0x1855000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 109715456 unmapped: 31088640 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:05.759792+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 109715456 unmapped: 31088640 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:06.759939+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264450 data_alloc: 218103808 data_used: 4370432
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 109715456 unmapped: 31088640 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99ff000/0x0/0x4ffc00000, data 0x17972eb/0x185d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:07.760082+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 109715456 unmapped: 31088640 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:08.760681+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 109715456 unmapped: 31088640 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:09.760942+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 109715456 unmapped: 31088640 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:10.761072+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108363776 unmapped: 32440320 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:11.761238+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263650 data_alloc: 218103808 data_used: 4370432
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108363776 unmapped: 32440320 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:12.761381+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99fd000/0x0/0x4ffc00000, data 0x17992eb/0x185f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108363776 unmapped: 32440320 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:13.761519+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108363776 unmapped: 32440320 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:14.761635+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108363776 unmapped: 32440320 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:15.761812+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108363776 unmapped: 32440320 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.573128700s of 12.650348663s, submitted: 29
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:16.761972+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263874 data_alloc: 218103808 data_used: 4370432
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:17.762125+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99fc000/0x0/0x4ffc00000, data 0x179a2eb/0x1860000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:18.762275+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:19.762447+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:20.762617+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:21.762842+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263874 data_alloc: 218103808 data_used: 4370432
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99fc000/0x0/0x4ffc00000, data 0x179a2eb/0x1860000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:22.763044+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:23.763157+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99fc000/0x0/0x4ffc00000, data 0x179a2eb/0x1860000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:24.763347+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:25.763474+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:26.763601+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263874 data_alloc: 218103808 data_used: 4370432
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:27.763733+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99fc000/0x0/0x4ffc00000, data 0x179a2eb/0x1860000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99fc000/0x0/0x4ffc00000, data 0x179a2eb/0x1860000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:28.763931+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99fc000/0x0/0x4ffc00000, data 0x179a2eb/0x1860000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805cfb3680
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805cd74000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e2f0c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:29.764187+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.208628654s of 13.212368965s, submitted: 1
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0c00 session 0x55805b7f43c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:30.764425+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:31.764632+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186319 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:32.764826+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:33.764986+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:34.765122+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:35.765299+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:36.765511+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186319 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:37.765659+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:38.765810+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:39.766018+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:40.766151+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:41.766275+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186319 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:42.766404+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:43.766531+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:44.766650+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:45.766970+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:46.767129+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186319 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:47.767323+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:48.767491+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:49.767660+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8e1000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8e1000 session 0x55805cfb2f00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805d8be3c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805a7e6b40
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e2f0800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805d25e000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e2f0c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.695281982s of 20.782047272s, submitted: 14
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:50.767834+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0c00 session 0x55805d25e5a0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dad9400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad9400 session 0x55805a673860
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805a672960
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d8be780
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e2f0800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805cfb34a0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 35627008 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:51.767980+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251055 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 35627008 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:52.768233+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 35627008 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9955000/0x0/0x4ffc00000, data 0x184034d/0x1907000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:53.768387+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 35627008 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:54.768558+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 35627008 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e2f0c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0c00 session 0x55805a7e74a0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:55.768711+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9955000/0x0/0x4ffc00000, data 0x184034d/0x1907000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dad8400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad8400 session 0x55805b435c20
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 35627008 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:56.768891+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9955000/0x0/0x4ffc00000, data 0x184034d/0x1907000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805d56d4a0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251055 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 34578432 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d92b0e0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dad8400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:57.768998+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad8400 session 0x55805d92a5a0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e2f0800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805b69d2c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 34537472 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:58.769135+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01a000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 34537472 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01a000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:59.769293+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 34537472 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:00.769418+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e2f0c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0c00 session 0x55805b69cb40
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805d567c20
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d567680
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dad8400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad8400 session 0x55805d566780
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 34570240 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e2f0800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.721254349s of 10.977932930s, submitted: 86
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:01.769537+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805d5663c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dad9c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad9c00 session 0x55805d25ef00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805d25fe00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d56d4a0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dad8400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad8400 session 0x55805d56de00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1241073 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 33996800 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:02.769700+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 33996800 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:03.769844+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 33996800 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:04.770031+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e2f0800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805d92a5a0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9af4000/0x0/0x4ffc00000, data 0x16a22eb/0x1768000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 33996800 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dad9800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad9800 session 0x55805a7e6b40
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:05.770149+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805a7e74a0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805cfb3680
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106815488 unmapped: 33988608 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:06.770273+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dad8400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e2f0800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242887 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106815488 unmapped: 33988608 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:07.770401+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9af3000/0x0/0x4ffc00000, data 0x16a22fb/0x1769000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 32948224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:08.770560+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 32948224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:09.770706+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 32948224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:10.770841+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 32948224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:11.770958+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1279063 data_alloc: 218103808 data_used: 5537792
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 32948224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9af3000/0x0/0x4ffc00000, data 0x16a22fb/0x1769000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:12.771098+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 32948224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:13.771226+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9af3000/0x0/0x4ffc00000, data 0x16a22fb/0x1769000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 32948224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:14.771365+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9af3000/0x0/0x4ffc00000, data 0x16a22fb/0x1769000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 32948224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:15.771477+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9af3000/0x0/0x4ffc00000, data 0x16a22fb/0x1769000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107864064 unmapped: 32940032 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:16.771592+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1279063 data_alloc: 218103808 data_used: 5537792
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107864064 unmapped: 32940032 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:17.771728+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.606376648s of 16.673978806s, submitted: 14
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 28213248 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:18.771890+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113213440 unmapped: 27590656 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:19.772043+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 27705344 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9099000/0x0/0x4ffc00000, data 0x20f42fb/0x21bb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:20.772140+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 27705344 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:21.772299+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1368463 data_alloc: 218103808 data_used: 6819840
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 27705344 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:22.772416+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113106944 unmapped: 27697152 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:23.772528+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9099000/0x0/0x4ffc00000, data 0x20f42fb/0x21bb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113106944 unmapped: 27697152 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:24.772800+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113106944 unmapped: 27697152 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:25.772950+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9099000/0x0/0x4ffc00000, data 0x20f42fb/0x21bb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 28254208 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:26.773131+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1362511 data_alloc: 218103808 data_used: 6823936
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 28254208 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:27.773279+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 28254208 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:28.773411+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 28254208 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:29.773552+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 28254208 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:30.773683+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 28246016 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:31.773800+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.666546822s of 14.050541878s, submitted: 114
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f909e000/0x0/0x4ffc00000, data 0x20f72fb/0x21be000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1362735 data_alloc: 218103808 data_used: 6823936
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 28246016 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:32.773854+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 28237824 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:33.773977+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 28237824 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dad8800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:34.774080+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad8800 session 0x55805b434780
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacac00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805d92ab40
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacb000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacb000 session 0x55805cd745a0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805cd74f00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d8bfe00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 27705344 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:35.774224+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8cc1000/0x0/0x4ffc00000, data 0x24d335d/0x259b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113131520 unmapped: 27672576 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:36.774377+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1396522 data_alloc: 218103808 data_used: 6823936
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113131520 unmapped: 27672576 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:37.774547+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113131520 unmapped: 27672576 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:38.774715+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113131520 unmapped: 27672576 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:39.774960+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8cc1000/0x0/0x4ffc00000, data 0x24d335d/0x259b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 27639808 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:40.775082+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacac00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805d8be5a0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8cc1000/0x0/0x4ffc00000, data 0x24d335d/0x259b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dad8800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad8800 session 0x55805d8bf2c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 27623424 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:41.775231+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacb400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacb400 session 0x55805d8bf860
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.050792694s of 10.149922371s, submitted: 31
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1398336 data_alloc: 218103808 data_used: 6823936
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805d8be000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 27623424 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:42.775367+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacac00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 27623424 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:43.775521+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113278976 unmapped: 27525120 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:44.775683+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115122176 unmapped: 25681920 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:45.775849+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8cc0000/0x0/0x4ffc00000, data 0x24d336d/0x259c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115154944 unmapped: 25649152 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:46.776100+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425240 data_alloc: 234881024 data_used: 10756096
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115163136 unmapped: 25640960 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:47.776222+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115163136 unmapped: 25640960 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:48.776399+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115171328 unmapped: 25632768 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:49.776582+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8cc0000/0x0/0x4ffc00000, data 0x24d336d/0x259c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115171328 unmapped: 25632768 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:50.776720+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8cc0000/0x0/0x4ffc00000, data 0x24d336d/0x259c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115195904 unmapped: 25608192 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:51.776945+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425048 data_alloc: 234881024 data_used: 10756096
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115195904 unmapped: 25608192 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:52.777062+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115195904 unmapped: 25608192 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:53.777208+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.226175308s of 12.244213104s, submitted: 6
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115195904 unmapped: 25608192 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:54.777333+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8b1f000/0x0/0x4ffc00000, data 0x267436d/0x273d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118652928 unmapped: 22151168 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:55.777477+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 22839296 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:56.777587+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494334 data_alloc: 234881024 data_used: 11829248
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 22831104 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:57.777724+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 22831104 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:58.777937+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 22831104 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:59.778101+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118038528 unmapped: 22765568 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:00.778331+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f84ce000/0x0/0x4ffc00000, data 0x2cbd36d/0x2d86000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118038528 unmapped: 22765568 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:01.778575+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1488982 data_alloc: 234881024 data_used: 11833344
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118038528 unmapped: 22765568 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:02.778794+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118046720 unmapped: 22757376 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:03.778954+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805b7f4f00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805c7f0960
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dad8800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118046720 unmapped: 22757376 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:04.779126+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.527749062s of 10.052300453s, submitted: 105
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 11K writes, 43K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 3004 syncs, 3.82 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2301 writes, 7858 keys, 2301 commit groups, 1.0 writes per commit group, ingest: 8.46 MB, 0.01 MB/s
                                           Interval WAL: 2301 writes, 911 syncs, 2.53 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f84d3000/0x0/0x4ffc00000, data 0x2cc036d/0x2d89000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [1])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad8800 session 0x55805d92af00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 24436736 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:05.779271+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:06.779440+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 24436736 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1371249 data_alloc: 218103808 data_used: 6823936
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:07.779611+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 24436736 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:08.779713+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 24436736 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad8400 session 0x55805d8be3c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805b7d7860
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8cbd000/0x0/0x4ffc00000, data 0x20f92fb/0x21c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:09.779906+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805d4721e0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:10.780056+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:11.781071+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215158 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:12.781198+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:13.782142+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:14.782310+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:15.782473+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:16.782646+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215158 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:17.782833+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:18.782976+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:19.783444+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:20.783576+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:21.783715+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215158 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:22.783851+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:23.784112+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:24.784246+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:25.784397+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:26.784707+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215158 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:27.784861+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:28.785071+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:29.785342+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:30.785560+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:31.785736+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215158 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:32.785995+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:33.786259+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:34.786455+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:35.786631+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:36.786813+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:37.787060+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215158 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110673920 unmapped: 30130176 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:38.787233+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110673920 unmapped: 30130176 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 34.834026337s of 34.967418671s, submitted: 43
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:39.787399+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805b7d6780
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacac00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805d8ad4a0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dad8800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad8800 session 0x55805d8ac3c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805d8ad680
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d8ad2c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 29532160 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:40.787555+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 29532160 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:41.787726+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 29532160 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:42.787939+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233960 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 29532160 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9ea6000/0x0/0x4ffc00000, data 0x12f12db/0x13b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:43.788098+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 29532160 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:44.788243+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 29532160 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacac00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805d8ad860
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:45.789805+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e2f0800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacb800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 29507584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:46.789958+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 29507584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:47.790094+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244525 data_alloc: 218103808 data_used: 1630208
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9ea6000/0x0/0x4ffc00000, data 0x12f12db/0x13b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 29507584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:48.790263+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 29507584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:49.790436+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 29507584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:50.790594+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 29507584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:51.790768+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 29507584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:52.790984+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244525 data_alloc: 218103808 data_used: 1630208
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 29499392 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9ea6000/0x0/0x4ffc00000, data 0x12f12db/0x13b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:53.791183+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 29499392 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:54.791329+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9ea6000/0x0/0x4ffc00000, data 0x12f12db/0x13b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 29499392 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:55.791456+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 29499392 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:56.791569+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.569715500s of 17.644144058s, submitted: 15
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112713728 unmapped: 28090368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:57.791736+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335881 data_alloc: 218103808 data_used: 1634304
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114606080 unmapped: 26198016 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:58.791919+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91fb000/0x0/0x4ffc00000, data 0x1f9c2db/0x2061000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [0,0,1])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115662848 unmapped: 25141248 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:59.792064+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91ed000/0x0/0x4ffc00000, data 0x1faa2db/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115662848 unmapped: 25141248 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:00.792222+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 25133056 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:01.792365+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 25133056 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91ed000/0x0/0x4ffc00000, data 0x1faa2db/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:02.792478+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350955 data_alloc: 218103808 data_used: 2863104
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 25133056 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:03.792596+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 25133056 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:04.792704+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 25133056 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:05.792843+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 25133056 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:06.793034+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91ed000/0x0/0x4ffc00000, data 0x1faa2db/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 25133056 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:07.793232+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350971 data_alloc: 218103808 data_used: 2863104
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 25133056 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:08.793412+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 25124864 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:09.793624+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 25124864 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:10.793848+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 25124864 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:11.794172+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91ed000/0x0/0x4ffc00000, data 0x1faa2db/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 25116672 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:12.794365+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350971 data_alloc: 218103808 data_used: 2863104
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 25116672 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:13.794520+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 25116672 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:14.794644+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 25116672 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:15.794767+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 25108480 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:16.794947+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 25108480 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:17.795057+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351123 data_alloc: 218103808 data_used: 2867200
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91ed000/0x0/0x4ffc00000, data 0x1faa2db/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 25108480 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:18.795206+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 25108480 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91ed000/0x0/0x4ffc00000, data 0x1faa2db/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:19.795435+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 25108480 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:20.795585+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 25108480 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:21.795724+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 25108480 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacbc00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacbc00 session 0x55805d56d680
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6fc800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc800 session 0x55805d56d4a0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805d56de00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:22.795843+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d25f860
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351123 data_alloc: 218103808 data_used: 2867200
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6fc800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.964529037s of 25.611534119s, submitted: 88
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc800 session 0x55805d25ef00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacac00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805d567c20
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacbc00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacbc00 session 0x55805d566780
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805d5663c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805b69d2c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 33177600 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:23.795992+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 33177600 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:24.796164+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 33177600 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f855c000/0x0/0x4ffc00000, data 0x2c3a2eb/0x2d00000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:25.796278+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 33177600 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:26.796366+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 33177600 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:27.796494+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1437857 data_alloc: 218103808 data_used: 2867200
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f855c000/0x0/0x4ffc00000, data 0x2c3a2eb/0x2d00000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 33177600 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6fc800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc800 session 0x55805d4afe00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:28.796631+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacac00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6fc400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114991104 unmapped: 33161216 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:29.796792+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 27746304 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:30.796917+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 25231360 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:31.797055+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 25231360 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:32.797180+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528574 data_alloc: 234881024 data_used: 14991360
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 25223168 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:33.797291+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f855a000/0x0/0x4ffc00000, data 0x2c3b2eb/0x2d01000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 25190400 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:34.797433+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f855a000/0x0/0x4ffc00000, data 0x2c3b2eb/0x2d01000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123002880 unmapped: 25149440 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:35.797584+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123002880 unmapped: 25149440 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:36.797719+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123002880 unmapped: 25149440 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:37.797977+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528574 data_alloc: 234881024 data_used: 14991360
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f855a000/0x0/0x4ffc00000, data 0x2c3b2eb/0x2d01000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123002880 unmapped: 25149440 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:38.798127+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123002880 unmapped: 25149440 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:39.798312+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.176643372s of 17.280221939s, submitted: 17
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123772928 unmapped: 24379392 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:40.798459+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123772928 unmapped: 24379392 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:41.798589+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7989000/0x0/0x4ffc00000, data 0x380d2eb/0x38d3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 24338432 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:42.798781+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616990 data_alloc: 234881024 data_used: 15196160
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 24338432 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:43.798986+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 24338432 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:44.799169+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 24338432 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:45.799353+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 24338432 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:46.799496+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 24330240 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:47.799689+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616990 data_alloc: 234881024 data_used: 15196160
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 24330240 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:48.799840+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 24330240 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:49.800192+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 24322048 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:50.800466+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 24322048 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:51.800627+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 24322048 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:52.800924+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616990 data_alloc: 234881024 data_used: 15196160
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 24322048 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:53.801073+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 24322048 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:54.801231+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 24322048 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:55.801379+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 24313856 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:56.801619+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 24313856 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:57.801826+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a935800 session 0x55805d56c780
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a7f0000 session 0x55805b7f21e0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616990 data_alloc: 234881024 data_used: 15196160
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 24313856 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:58.801996+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 24313856 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:59.802176+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 24313856 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:00.802327+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 24313856 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:01.802466+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 24313856 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:02.802599+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616990 data_alloc: 234881024 data_used: 15196160
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 24305664 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:03.802781+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 24305664 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:04.802930+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 24305664 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:05.803062+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 24305664 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:06.803226+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 24305664 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:07.803418+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616990 data_alloc: 234881024 data_used: 15196160
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 24305664 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:08.803519+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a935800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.359371185s of 28.531023026s, submitted: 61
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 24305664 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:09.803692+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 24305664 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:10.803839+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 24297472 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:11.803980+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 24289280 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:12.804130+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1617122 data_alloc: 234881024 data_used: 15196160
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:13.804284+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 24289280 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:14.804400+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 24289280 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:15.804624+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 24215552 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:16.804726+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124076032 unmapped: 24076288 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:17.804942+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 22904832 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616282 data_alloc: 234881024 data_used: 15196160
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:18.805155+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 22904832 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:19.805330+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125255680 unmapped: 22896640 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:20.805474+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125255680 unmapped: 22896640 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:21.805619+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 22888448 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:22.805790+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 22888448 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616282 data_alloc: 234881024 data_used: 15196160
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:23.805917+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125272064 unmapped: 22880256 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.529578209s of 15.271432877s, submitted: 389
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:24.806037+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124223488 unmapped: 23928832 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:25.806179+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124223488 unmapped: 23928832 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:26.806333+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124223488 unmapped: 23928832 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:27.806497+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124223488 unmapped: 23928832 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616150 data_alloc: 234881024 data_used: 15196160
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:28.806620+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124223488 unmapped: 23928832 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:29.806778+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124231680 unmapped: 23920640 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:30.806909+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124231680 unmapped: 23920640 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:31.806989+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124231680 unmapped: 23920640 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:32.807156+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124231680 unmapped: 23920640 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616150 data_alloc: 234881024 data_used: 15196160
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:33.807289+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124231680 unmapped: 23920640 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:34.807434+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124231680 unmapped: 23920640 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:35.807556+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124239872 unmapped: 23912448 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805d566000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.931664467s of 11.936676025s, submitted: 1
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc400 session 0x55805d863e00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805c453e00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91ec000/0x0/0x4ffc00000, data 0x1fab2db/0x2070000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:36.807998+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 30932992 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:37.808154+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 30932992 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1360030 data_alloc: 218103808 data_used: 2863104
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:38.808346+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 30932992 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:39.808576+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 30932992 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:40.808730+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91ec000/0x0/0x4ffc00000, data 0x1fab2db/0x2070000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 30932992 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:41.808954+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 30932992 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805d25e000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacb800 session 0x55805d3f14a0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91ec000/0x0/0x4ffc00000, data 0x1fab2db/0x2070000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacb800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:42.809267+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacb800 session 0x55805d4afc20
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238042 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:43.809397+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:44.809544+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:45.809683+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:46.809839+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:47.809957+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238042 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:48.810082+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:49.810254+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:50.810460+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:51.810587+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:52.810754+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238042 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:53.810928+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:54.811074+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:55.811251+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:56.811439+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:57.811519+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238042 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:58.811624+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:59.811780+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:00.811965+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:01.812079+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:02.812264+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238042 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:03.812482+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d4ae000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6fc400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc400 session 0x55805d862d20
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacac00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805c6323c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e2f0800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805d4ae960
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:04.812656+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.715570450s of 28.907997131s, submitted: 67
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115310592 unmapped: 32841728 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805b69cb40
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6fc400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc400 session 0x55805d4afa40
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacac00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805d3f10e0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacb800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacb800 session 0x55805a673860
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6fc800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc800 session 0x55805d8623c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:05.812778+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115326976 unmapped: 32825344 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:06.812934+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115326976 unmapped: 32825344 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9e88000/0x0/0x4ffc00000, data 0x130f2db/0x13d4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d8ad0e0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:07.813077+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6fc400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc400 session 0x55805d5641e0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115326976 unmapped: 32825344 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252852 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6fc800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc800 session 0x55805cfb2000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacac00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:08.813221+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 33480704 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805b7f4960
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:09.813378+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 33480704 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacb800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d7da000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:10.813581+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 33480704 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9e87000/0x0/0x4ffc00000, data 0x130f2eb/0x13d5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:11.813749+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 33480704 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:12.813951+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 33480704 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258466 data_alloc: 218103808 data_used: 815104
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9e87000/0x0/0x4ffc00000, data 0x130f2eb/0x13d5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacb800 session 0x55805cd752c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d7da000 session 0x55805c3fcb40
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:13.814129+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 33480704 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805b7d61e0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:14.814341+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 33472512 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:15.814490+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 33472512 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:16.814646+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 33472512 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:17.814790+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 33472512 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240867 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:18.814929+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 33472512 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:19.815203+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 33472512 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:20.815835+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 33472512 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:21.816078+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 33472512 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:22.816305+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 33472512 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240867 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6fc400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.481660843s of 18.543272018s, submitted: 18
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc400 session 0x55805cd7d2c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6fc800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc800 session 0x55805d56de00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacac00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805d56cf00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:23.816427+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805c452000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6fc400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc400 session 0x55805c452780
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 33374208 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:24.816507+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 33374208 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:25.816635+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 33374208 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:26.816752+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f994c000/0x0/0x4ffc00000, data 0x184b2db/0x1910000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 33374208 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6fc800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc800 session 0x55805d3f0d20
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:27.816889+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 33071104 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d7da000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d7da400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1292449 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9928000/0x0/0x4ffc00000, data 0x186f2db/0x1934000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:28.817009+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115146752 unmapped: 33005568 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:29.817213+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 31719424 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:30.817351+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 31719424 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d7da000 session 0x55805d5650e0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d7da400 session 0x55805d8adc20
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:31.817546+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 31719424 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:32.817718+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113336320 unmapped: 34816000 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805a673860
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244819 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:33.817833+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113336320 unmapped: 34816000 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: mgrc ms_handle_reset ms_handle_reset con 0x55805cfc4c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/844402651
Nov 23 21:18:45 compute-1 ceph-osd[77613]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/844402651,v1:192.168.122.100:6801/844402651]
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: get_auth_request con 0x55805d7da400 auth_method 0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: mgrc handle_mgr_configure stats_period=5
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:34.817978+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f8400 session 0x55805d863860
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6fc400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d64ac00 session 0x55805b4350e0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6fc800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:35.818100+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:36.818267+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:37.818459+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244819 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:38.818588+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:39.818787+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:40.818966+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:41.819116+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:42.819239+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244819 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:43.819390+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:44.819513+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:45.819649+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:46.819717+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:47.819814+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244819 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:48.819936+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:49.820048+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:50.820154+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:51.820270+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:52.820448+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244819 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:53.821144+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:54.821328+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:55.822186+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:56.822519+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:57.822901+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244819 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:58.823171+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:59.823632+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d7da000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d7da000 session 0x55805d25fa40
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8c1000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c1000 session 0x55805d25e000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8c0000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0000 session 0x55805d8ada40
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8c0400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0400 session 0x55805d92bc20
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 37.006832123s of 37.314971924s, submitted: 21
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:00.823833+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d8ac3c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d7da000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d7da000 session 0x55805d56c5a0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8c0000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0000 session 0x55805d65dc20
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8c1000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c1000 session 0x55805d3f1680
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8c0c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0c00 session 0x55805d8bf2c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 34373632 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:01.824047+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9db4000/0x0/0x4ffc00000, data 0x13e22eb/0x14a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 34373632 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:02.824315+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9db4000/0x0/0x4ffc00000, data 0x13e22eb/0x14a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 34373632 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277031 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9db4000/0x0/0x4ffc00000, data 0x13e22eb/0x14a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:03.824562+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 34373632 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:04.824991+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 34365440 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:05.825346+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805b7d74a0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 34365440 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d7da000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d7da000 session 0x55805cd745a0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:06.825558+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 34365440 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8c0000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0000 session 0x55805cc80b40
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:07.825781+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8c0c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0c00 session 0x55805cc812c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112230400 unmapped: 35921920 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9db3000/0x0/0x4ffc00000, data 0x13e22fb/0x14a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278845 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8c1000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d73d400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:08.826208+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111910912 unmapped: 36241408 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:09.826567+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9db3000/0x0/0x4ffc00000, data 0x13e22fb/0x14a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:10.826702+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:11.826846+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:12.826941+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285837 data_alloc: 218103808 data_used: 1339392
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:13.827091+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9db3000/0x0/0x4ffc00000, data 0x13e22fb/0x14a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:14.827234+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:15.827494+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:16.827659+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:17.827796+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285837 data_alloc: 218103808 data_used: 1339392
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:18.827935+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:19.828134+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9db3000/0x0/0x4ffc00000, data 0x13e22fb/0x14a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.161880493s of 19.237621307s, submitted: 18
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 31727616 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:20.828299+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 31547392 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:21.828424+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 31866880 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:22.828582+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 31866880 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361745 data_alloc: 218103808 data_used: 1740800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:23.828737+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 31866880 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:24.828903+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9468000/0x0/0x4ffc00000, data 0x1d1e2fb/0x1de5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 31866880 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:25.829097+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 31866880 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:26.829324+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9468000/0x0/0x4ffc00000, data 0x1d1e2fb/0x1de5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 31989760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:27.830356+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 31989760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356265 data_alloc: 218103808 data_used: 1740800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9453000/0x0/0x4ffc00000, data 0x1d422fb/0x1e09000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:28.830754+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 31989760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:29.831469+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 31989760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9453000/0x0/0x4ffc00000, data 0x1d422fb/0x1e09000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:30.832073+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 31989760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:31.832577+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9453000/0x0/0x4ffc00000, data 0x1d422fb/0x1e09000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 31989760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9453000/0x0/0x4ffc00000, data 0x1d422fb/0x1e09000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:32.832732+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 31989760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356569 data_alloc: 218103808 data_used: 1748992
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:33.832850+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 31989760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.781532288s of 14.450411797s, submitted: 111
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:34.832972+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 31866880 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:35.833057+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c1000 session 0x55805b69d4a0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d73d400 session 0x55805d25e1e0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 31842304 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805b2a52c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9443000/0x0/0x4ffc00000, data 0x1d522fb/0x1e19000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:36.833416+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 31834112 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:37.833570+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 31834112 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253702 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:38.833696+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 31834112 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:39.833898+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 31834112 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:40.834123+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 31834112 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:41.834412+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 31834112 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:42.834727+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 31834112 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253702 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:43.834952+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 31834112 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:44.835094+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 31825920 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:45.835271+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 31825920 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:46.835428+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 31825920 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:47.835641+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 31825920 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253702 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:48.835773+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 31825920 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:49.835991+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 31825920 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:50.836189+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 31817728 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:51.836320+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 31817728 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:52.836466+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 31817728 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253702 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:53.836595+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 31817728 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:54.836734+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 31817728 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:55.836878+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 31817728 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:56.837077+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 31817728 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:57.837277+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d7da000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d7da000 session 0x55805d4730e0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8c0000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0000 session 0x55805cc7fe00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8c0c00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0c00 session 0x55805cc7f2c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805cc7ed20
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d73d400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.686355591s of 23.753026962s, submitted: 20
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d73d400 session 0x55805a673680
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d7da000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d7da000 session 0x55805a6734a0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116801536 unmapped: 38707200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8c0000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0000 session 0x55805cc810e0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e6fc000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc000 session 0x55805d3f1c20
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d3f01e0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306052 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:58.839302+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 38699008 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:59.839475+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 38699008 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f98d4000/0x0/0x4ffc00000, data 0x18c32db/0x1988000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:00.839637+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 38699008 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:01.839779+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 38699008 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:02.839947+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 38690816 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306052 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:03.840086+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 38690816 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f98d4000/0x0/0x4ffc00000, data 0x18c32db/0x1988000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d73d400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:04.840256+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d73d400 session 0x55805a7e63c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 38690816 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d7da000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8c0000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:05.840424+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 38690816 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:06.840614+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 36659200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:07.840781+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 36659200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:08.840989+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355300 data_alloc: 218103808 data_used: 7626752
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 36659200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:09.841208+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 36659200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f98d4000/0x0/0x4ffc00000, data 0x18c32db/0x1988000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:10.841327+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 36659200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:11.841404+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 36659200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:12.841514+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f98d4000/0x0/0x4ffc00000, data 0x18c32db/0x1988000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 36659200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:13.841652+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355300 data_alloc: 218103808 data_used: 7626752
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 36659200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:14.841829+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 36659200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e6fc400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.395429611s of 17.444917679s, submitted: 6
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc400 session 0x55805a7e6d20
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:15.841916+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e6fc800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc800 session 0x55805a7e65a0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e6fcc00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fcc00 session 0x55805cd752c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d4aef00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d73d400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d73d400 session 0x55805c452780
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118292480 unmapped: 37216256 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:16.842067+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8d80000/0x0/0x4ffc00000, data 0x24172db/0x24dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8d80000/0x0/0x4ffc00000, data 0x24172db/0x24dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120340480 unmapped: 35168256 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:17.842210+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 119513088 unmapped: 35995648 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:18.842356+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1475407 data_alloc: 218103808 data_used: 7741440
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120201216 unmapped: 35307520 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:19.842536+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f87dd000/0x0/0x4ffc00000, data 0x29ba2db/0x2a7f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120201216 unmapped: 35307520 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:20.842719+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120201216 unmapped: 35307520 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:21.842936+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120201216 unmapped: 35307520 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:22.843096+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e6fc400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc400 session 0x55805d25fa40
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f87dd000/0x0/0x4ffc00000, data 0x29ba2db/0x2a7f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 35291136 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:23.843287+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e6fc800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc800 session 0x55805d4723c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1483725 data_alloc: 218103808 data_used: 7733248
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 35291136 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e6fd000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fd000 session 0x55805d65d680
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:24.843435+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805cd74960
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d73d400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e6fc400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 35282944 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f87ba000/0x0/0x4ffc00000, data 0x29db30e/0x2aa2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:25.843560+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123011072 unmapped: 32497664 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:26.843720+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130220032 unmapped: 25288704 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:27.843848+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f87ba000/0x0/0x4ffc00000, data 0x29db30e/0x2aa2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130252800 unmapped: 25255936 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:28.844013+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566132 data_alloc: 234881024 data_used: 19611648
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130252800 unmapped: 25255936 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:29.844195+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130252800 unmapped: 25255936 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:30.844385+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f87ba000/0x0/0x4ffc00000, data 0x29db30e/0x2aa2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130285568 unmapped: 25223168 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:31.844535+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130285568 unmapped: 25223168 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:32.844699+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130285568 unmapped: 25223168 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:33.844848+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566132 data_alloc: 234881024 data_used: 19611648
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f87ba000/0x0/0x4ffc00000, data 0x29db30e/0x2aa2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130285568 unmapped: 25223168 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:34.844967+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130285568 unmapped: 25223168 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:35.845134+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130285568 unmapped: 25223168 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:36.845295+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.900854111s of 21.241012573s, submitted: 73
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134406144 unmapped: 21102592 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:37.845453+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f87ba000/0x0/0x4ffc00000, data 0x29db30e/0x2aa2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [0,0,0,0,0,0,0,16,0,27])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 133488640 unmapped: 22020096 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:38.845595+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1629540 data_alloc: 234881024 data_used: 19615744
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 133537792 unmapped: 21970944 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:39.845775+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134168576 unmapped: 21340160 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:40.845911+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134168576 unmapped: 21340160 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:41.846032+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134168576 unmapped: 21340160 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:42.846164+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7f44000/0x0/0x4ffc00000, data 0x325130e/0x3318000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134201344 unmapped: 21307392 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:43.846302+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1635894 data_alloc: 234881024 data_used: 19615744
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134201344 unmapped: 21307392 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:44.846471+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134201344 unmapped: 21307392 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:45.846729+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7f41000/0x0/0x4ffc00000, data 0x325430e/0x331b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134225920 unmapped: 21282816 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:46.846934+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134225920 unmapped: 21282816 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:47.847102+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134225920 unmapped: 21282816 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:48.847284+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1634766 data_alloc: 234881024 data_used: 19615744
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7f41000/0x0/0x4ffc00000, data 0x325430e/0x331b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134225920 unmapped: 21282816 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.255509377s of 12.832665443s, submitted: 75
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:49.847525+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7f41000/0x0/0x4ffc00000, data 0x325430e/0x331b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134250496 unmapped: 21258240 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:50.847648+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134250496 unmapped: 21258240 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:51.847771+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7f41000/0x0/0x4ffc00000, data 0x325430e/0x331b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134250496 unmapped: 21258240 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:52.847925+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134250496 unmapped: 21258240 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:53.848093+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1634766 data_alloc: 234881024 data_used: 19615744
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134250496 unmapped: 21258240 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:54.848378+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d73d400 session 0x55805d8be1e0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc400 session 0x55805d8bf0e0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e6fc800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7f3b000/0x0/0x4ffc00000, data 0x325a30e/0x3321000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [1])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc800 session 0x55805d92b0e0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 28704768 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7f3b000/0x0/0x4ffc00000, data 0x325a30e/0x3321000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:55.848526+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 28704768 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:56.848720+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 28704768 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:57.848965+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 28704768 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:58.849173+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1416071 data_alloc: 218103808 data_used: 7733248
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d7da000 session 0x55805d472b40
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0000 session 0x55805d5652c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 34758656 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.843377113s of 10.034677505s, submitted: 59
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:59.849368+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d565860
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9302000/0x0/0x4ffc00000, data 0x1e902db/0x1f55000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 34750464 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:00.849503+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 34750464 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:01.849709+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 34750464 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:02.849958+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 34750464 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:03.850529+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1273869 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 34750464 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:04.851233+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:05.851577+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:06.851969+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:07.852154+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:08.852373+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1273869 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:09.853315+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:10.854031+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:11.854597+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:12.855000+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:13.855250+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1273869 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:14.855665+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:15.855953+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:16.856116+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:17.856275+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:18.856519+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1273869 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:19.856788+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:20.857155+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:21.857444+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:22.857567+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:23.857698+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1273869 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:24.857825+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d73d400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d73d400 session 0x55805a99b4a0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e6fc400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc400 session 0x55805a99ba40
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e6fc800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc800 session 0x55805d92a960
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d92af00
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d73d400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.899166107s of 25.910942078s, submitted: 4
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d73d400 session 0x55805d92ba40
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:25.857932+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8c0000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0000 session 0x55805d8ac3c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e6fc400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc400 session 0x55805b7f4780
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 35102720 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e6fd400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fd400 session 0x55805cfb30e0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805cfb23c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:26.858074+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 35102720 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:27.858302+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 35102720 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:28.858497+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 35102720 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99f2000/0x0/0x4ffc00000, data 0x17a52db/0x186a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321099 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:29.858682+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 35102720 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:30.858858+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 35102720 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:31.859117+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 35102720 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d73d400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d73d400 session 0x55805cfb3c20
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8c0000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0000 session 0x55805cfb3860
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:32.859259+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 35102720 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e6fc400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc400 session 0x55805c7f0000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e6fd800
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fd800 session 0x55805c7f12c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d73d400
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:33.859408+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120422400 unmapped: 35086336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321099 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:34.859539+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99f2000/0x0/0x4ffc00000, data 0x17a52db/0x186a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:35.859669+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:36.859814+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99f2000/0x0/0x4ffc00000, data 0x17a52db/0x186a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:37.859925+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:38.860098+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356667 data_alloc: 218103808 data_used: 5595136
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:39.860259+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:40.860370+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99f2000/0x0/0x4ffc00000, data 0x17a52db/0x186a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99f2000/0x0/0x4ffc00000, data 0x17a52db/0x186a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:41.860490+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:42.860639+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:43.860968+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356667 data_alloc: 218103808 data_used: 5595136
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:44.861097+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.807069778s of 19.842288971s, submitted: 9
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:45.861221+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99f2000/0x0/0x4ffc00000, data 0x17a52db/0x186a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 31932416 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:46.861338+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8d78000/0x0/0x4ffc00000, data 0x20002db/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:47.861603+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8d78000/0x0/0x4ffc00000, data 0x20002db/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:48.861735+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1436141 data_alloc: 218103808 data_used: 6033408
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8d78000/0x0/0x4ffc00000, data 0x20002db/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:49.861926+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:50.862061+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:51.862261+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:52.862392+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:53.862557+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1436141 data_alloc: 218103808 data_used: 6033408
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:54.862687+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8d78000/0x0/0x4ffc00000, data 0x20002db/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:55.862906+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:56.863043+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:57.863165+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805c7f05a0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d73d400 session 0x55805cd7c3c0
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8c0000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.674288750s of 12.870928764s, submitted: 67
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8d78000/0x0/0x4ffc00000, data 0x20002db/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0000 session 0x55805cc7e000
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:58.863305+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:59.863563+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:00.863698+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:01.863912+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:02.864037+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:03.864197+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:04.864346+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:05.864532+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:06.864679+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:08.349772+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:09.349907+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:10.350137+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:11.350272+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:12.350417+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:13.350568+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:14.350691+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:15.350910+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:16.351095+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:17.351294+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:18.351422+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:19.351534+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:20.351719+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:21.351847+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:22.351992+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:23.352203+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:24.352327+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:25.352620+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:26.352716+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:27.352809+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:28.352975+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:29.353123+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:30.353280+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:31.353414+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:32.353509+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:33.353653+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:34.353824+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:35.353979+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:36.354134+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:37.354280+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:38.354416+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:39.354581+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:40.355398+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:41.356028+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:42.356585+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:43.357080+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:44.357601+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:45.358010+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:46.358293+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:47.358511+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:48.358794+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:49.359058+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:50.359381+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:51.359522+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:52.359739+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:53.359984+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:54.360242+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:55.360420+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:56.360578+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:57.360814+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:58.361008+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:59.361148+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:00.361307+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:01.361504+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:02.361656+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:03.361818+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:04.361953+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:05.362115+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:06.362275+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:07.362425+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:08.362539+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:09.362657+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:10.362795+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:11.362987+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:12.363118+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121356288 unmapped: 34152448 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: do_command 'config diff' '{prefix=config diff}'
Nov 23 21:18:45 compute-1 ceph-osd[77613]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 23 21:18:45 compute-1 ceph-osd[77613]: do_command 'config show' '{prefix=config show}'
Nov 23 21:18:45 compute-1 ceph-osd[77613]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 23 21:18:45 compute-1 ceph-osd[77613]: do_command 'counter dump' '{prefix=counter dump}'
Nov 23 21:18:45 compute-1 ceph-osd[77613]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 23 21:18:45 compute-1 ceph-osd[77613]: do_command 'counter schema' '{prefix=counter schema}'
Nov 23 21:18:45 compute-1 ceph-osd[77613]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:13.363249+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 34668544 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:18:45 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:14.363679+0000)
Nov 23 21:18:45 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 34684928 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:18:45 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:18:45 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:18:45 compute-1 ceph-osd[77613]: do_command 'log dump' '{prefix=log dump}'
Nov 23 21:18:45 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 23 21:18:45 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2141623699' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 23 21:18:45 compute-1 ceph-mon[80135]: from='client.16632 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:45 compute-1 ceph-mon[80135]: from='client.26681 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:45 compute-1 ceph-mon[80135]: pgmap v1121: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:18:45 compute-1 ceph-mon[80135]: from='client.26041 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:45 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/898839678' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 23 21:18:45 compute-1 ceph-mon[80135]: from='client.26705 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:45 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3468948448' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 23 21:18:45 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3788270769' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 23 21:18:45 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1749544287' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 23 21:18:45 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1669620096' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 23 21:18:45 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2696775381' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 23 21:18:45 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3924259925' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 23 21:18:45 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/833699972' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 23 21:18:45 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1492694451' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 23 21:18:45 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2141623699' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 23 21:18:45 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 23 21:18:45 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3079711421' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 23 21:18:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 21:18:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:46.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 21:18:46 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Nov 23 21:18:46 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1895009701' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 23 21:18:46 compute-1 ceph-mon[80135]: from='client.16680 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:46 compute-1 ceph-mon[80135]: from='client.26738 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:46 compute-1 ceph-mon[80135]: from='client.26756 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:46 compute-1 ceph-mon[80135]: from='client.26074 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:46 compute-1 ceph-mon[80135]: from='client.16704 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:46 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1580129398' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 23 21:18:46 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3826487999' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 23 21:18:46 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3079711421' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 23 21:18:46 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/4167770684' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 23 21:18:46 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3642177791' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 23 21:18:46 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1895009701' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 23 21:18:46 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Nov 23 21:18:46 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2726670865' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 23 21:18:46 compute-1 crontab[247065]: (root) LIST (root)
Nov 23 21:18:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:18:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:47.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:18:47 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:18:47 compute-1 sudo[247154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:18:47 compute-1 sudo[247154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:18:47 compute-1 sudo[247154]: pam_unix(sudo:session): session closed for user root
Nov 23 21:18:47 compute-1 ceph-mon[80135]: from='client.26780 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:47 compute-1 ceph-mon[80135]: from='client.26095 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:47 compute-1 ceph-mon[80135]: from='client.16725 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:47 compute-1 ceph-mon[80135]: from='client.26795 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:47 compute-1 ceph-mon[80135]: pgmap v1122: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:18:47 compute-1 ceph-mon[80135]: from='client.26107 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:47 compute-1 ceph-mon[80135]: from='client.16746 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:47 compute-1 ceph-mon[80135]: from='client.26816 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:47 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/294278415' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 23 21:18:47 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2023134629' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 23 21:18:47 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2726670865' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 23 21:18:47 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3526342823' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 23 21:18:47 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3778759182' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 23 21:18:47 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Nov 23 21:18:47 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1319081198' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 23 21:18:47 compute-1 nova_compute[230183]: 2025-11-23 21:18:47.984 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:18:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:18:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:48.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:18:48 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Nov 23 21:18:48 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3684318383' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 23 21:18:48 compute-1 nova_compute[230183]: 2025-11-23 21:18:48.331 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:18:48 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Nov 23 21:18:48 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2607410095' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 23 21:18:48 compute-1 ceph-mon[80135]: from='client.26119 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:48 compute-1 ceph-mon[80135]: from='client.16764 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:48 compute-1 ceph-mon[80135]: from='client.26831 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:18:48 compute-1 ceph-mon[80135]: from='client.26134 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:48 compute-1 ceph-mon[80135]: from='client.16785 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:48 compute-1 ceph-mon[80135]: from='client.26855 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:18:48 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2992208968' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 23 21:18:48 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3948095813' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 23 21:18:48 compute-1 ceph-mon[80135]: from='client.26149 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:48 compute-1 ceph-mon[80135]: from='client.16809 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:48 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1319081198' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 23 21:18:48 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3147972838' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 23 21:18:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:18:48 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3684318383' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 23 21:18:48 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2181781317' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 23 21:18:48 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Nov 23 21:18:48 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3220501528' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 23 21:18:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:18:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:49.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:18:49 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Nov 23 21:18:49 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1096267481' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 23 21:18:49 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Nov 23 21:18:49 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/210989180' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 23 21:18:49 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Nov 23 21:18:49 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/746869594' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 23 21:18:49 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Nov 23 21:18:49 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/715654558' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 23 21:18:49 compute-1 ceph-mon[80135]: from='client.26873 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:18:49 compute-1 ceph-mon[80135]: from='client.26161 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:49 compute-1 ceph-mon[80135]: from='client.16836 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:18:49 compute-1 ceph-mon[80135]: from='client.26894 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:18:49 compute-1 ceph-mon[80135]: pgmap v1123: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:18:49 compute-1 ceph-mon[80135]: from='client.26173 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:18:49 compute-1 ceph-mon[80135]: from='client.16863 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:18:49 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2607410095' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 23 21:18:49 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3220501528' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 23 21:18:49 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1166756432' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 23 21:18:49 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1096267481' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 23 21:18:49 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2988905069' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 23 21:18:49 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/210989180' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 23 21:18:49 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/538972713' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 23 21:18:49 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/746869594' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 23 21:18:49 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1452038472' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 23 21:18:49 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/715654558' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 23 21:18:49 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Nov 23 21:18:49 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3864745210' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 23 21:18:49 compute-1 systemd[1]: Starting Hostname Service...
Nov 23 21:18:50 compute-1 systemd[1]: Started Hostname Service.
Nov 23 21:18:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:18:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:50.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:18:50 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Nov 23 21:18:50 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1508255163' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 23 21:18:50 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Nov 23 21:18:50 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2620366759' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 23 21:18:50 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 23 21:18:50 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3524373913' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 23 21:18:50 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Nov 23 21:18:50 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1660088284' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 23 21:18:50 compute-1 ceph-mon[80135]: from='client.26179 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:18:50 compute-1 ceph-mon[80135]: from='client.16878 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:18:50 compute-1 ceph-mon[80135]: from='client.26197 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:18:50 compute-1 ceph-mon[80135]: from='client.16893 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:18:50 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3080332708' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 23 21:18:50 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3864745210' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 23 21:18:50 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/4200520148' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 23 21:18:50 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3953901098' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 23 21:18:50 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2483345435' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 23 21:18:50 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1508255163' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 23 21:18:50 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2620366759' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 23 21:18:50 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1699447290' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 23 21:18:50 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/4104383779' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 23 21:18:50 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3349483395' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 23 21:18:50 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3969985076' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 23 21:18:50 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2946597723' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 23 21:18:50 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3524373913' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 23 21:18:50 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1660088284' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 23 21:18:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:18:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:51.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:18:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:18:51.076 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:18:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:18:51.077 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:18:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:18:51.077 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:18:51 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Nov 23 21:18:51 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3263217182' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 23 21:18:51 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Nov 23 21:18:51 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1068717160' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 23 21:18:51 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Nov 23 21:18:51 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2174909781' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 23 21:18:51 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Nov 23 21:18:51 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/584137550' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 23 21:18:51 compute-1 ceph-mon[80135]: from='client.26209 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:18:51 compute-1 ceph-mon[80135]: pgmap v1124: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:18:51 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/4095315572' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 23 21:18:51 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2106951192' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 23 21:18:51 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/773834112' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 23 21:18:51 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3275635344' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 23 21:18:51 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3263217182' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 23 21:18:51 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1068717160' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 23 21:18:51 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3131615339' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 23 21:18:51 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/393723153' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 23 21:18:51 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2174909781' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 23 21:18:51 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2339120237' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 23 21:18:51 compute-1 ceph-mon[80135]: from='client.27077 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:51 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3724249338' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 23 21:18:51 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/584137550' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 23 21:18:51 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3261763036' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 23 21:18:51 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2545793442' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 23 21:18:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:18:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:52.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:18:52 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Nov 23 21:18:52 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2864661257' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 23 21:18:52 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:18:52 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1438252920' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 23 21:18:52 compute-1 ceph-mon[80135]: from='client.27101 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:52 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/953132378' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 23 21:18:52 compute-1 ceph-mon[80135]: from='client.26296 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:18:52 compute-1 ceph-mon[80135]: from='client.17043 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:52 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2864661257' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 23 21:18:52 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/400688937' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 23 21:18:52 compute-1 ceph-mon[80135]: pgmap v1125: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:18:52 compute-1 ceph-mon[80135]: from='client.27131 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:18:52 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3628178677' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 23 21:18:52 compute-1 ceph-mon[80135]: from='client.17067 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:52 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3369750004' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 23 21:18:52 compute-1 ceph-mon[80135]: from='client.26317 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:52 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Nov 23 21:18:52 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3388997346' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 23 21:18:53 compute-1 nova_compute[230183]: 2025-11-23 21:18:53.021 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:18:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:18:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:53.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:18:53 compute-1 nova_compute[230183]: 2025-11-23 21:18:53.333 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:18:53 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions"} v 0)
Nov 23 21:18:53 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1817915753' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 23 21:18:53 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3388997346' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 23 21:18:53 compute-1 ceph-mon[80135]: from='client.27152 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:18:53 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/420032919' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 23 21:18:53 compute-1 ceph-mon[80135]: from='client.17091 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:18:53 compute-1 ceph-mon[80135]: from='client.26329 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:53 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1817915753' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 23 21:18:53 compute-1 ceph-mon[80135]: from='client.26338 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:18:53 compute-1 ceph-mon[80135]: from='client.27179 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:18:53 compute-1 ceph-mon[80135]: from='client.17118 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:18:53 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3840698566' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 23 21:18:53 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Nov 23 21:18:53 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2836468749' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 23 21:18:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:18:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:54.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:18:54 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Nov 23 21:18:54 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1210590168' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 23 21:18:54 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 23 21:18:54 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 23 21:18:54 compute-1 ceph-mon[80135]: from='client.27197 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:18:54 compute-1 ceph-mon[80135]: from='client.26353 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:18:54 compute-1 ceph-mon[80135]: from='client.17136 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:18:54 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2836468749' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 23 21:18:54 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1279828983' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 23 21:18:54 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1025556095' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 23 21:18:54 compute-1 ceph-mon[80135]: from='client.26368 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:18:54 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1210590168' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 23 21:18:54 compute-1 ceph-mon[80135]: from='client.17163 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:18:54 compute-1 ceph-mon[80135]: from='client.27221 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:18:54 compute-1 ceph-mon[80135]: pgmap v1126: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:18:54 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/736443909' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 23 21:18:54 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 23 21:18:54 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 23 21:18:54 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3859332303' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 23 21:18:54 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 23 21:18:54 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 23 21:18:54 compute-1 ceph-mon[80135]: from='client.17196 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:18:54 compute-1 ceph-mon[80135]: from='client.26383 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:18:54 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 23 21:18:54 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 23 21:18:54 compute-1 ceph-mon[80135]: from='client.27248 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:18:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:18:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:55.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:18:55 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Nov 23 21:18:55 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1583926911' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 23 21:18:55 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 23 21:18:55 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 23 21:18:55 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3502980523' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 23 21:18:55 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2788619157' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 23 21:18:55 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 23 21:18:55 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 23 21:18:55 compute-1 ceph-mon[80135]: from='client.17217 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:18:55 compute-1 ceph-mon[80135]: from='client.26407 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:18:55 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1583926911' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 23 21:18:55 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 23 21:18:55 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 23 21:18:55 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 23 21:18:55 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 23 21:18:55 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1917580326' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 23 21:18:55 compute-1 ceph-mon[80135]: from='client.17250 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:18:55 compute-1 ceph-mon[80135]: from='client.27302 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:55 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 23 21:18:55 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 23 21:18:55 compute-1 ceph-mon[80135]: from='client.26425 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:18:55 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1696057665' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 23 21:18:55 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 23 21:18:55 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 23 21:18:55 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 23 21:18:55 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 23 21:18:56 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Nov 23 21:18:56 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1982132384' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 23 21:18:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:18:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:56.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:18:56 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df"} v 0)
Nov 23 21:18:56 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3369457675' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 23 21:18:56 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 23 21:18:56 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 23 21:18:56 compute-1 ceph-mon[80135]: from='client.26446 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:18:56 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1982132384' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 23 21:18:56 compute-1 ceph-mon[80135]: from='client.17283 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:56 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3968254591' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 23 21:18:56 compute-1 ceph-mon[80135]: pgmap v1127: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:18:56 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3369457675' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 23 21:18:56 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/4173198822' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 23 21:18:56 compute-1 ceph-mon[80135]: from='client.26470 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:56 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Nov 23 21:18:56 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1872188886' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 23 21:18:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:18:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:57.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:18:57 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:18:57 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Nov 23 21:18:57 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3894689480' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 23 21:18:57 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1872188886' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 23 21:18:57 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/10812519' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 23 21:18:57 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/717847632' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 23 21:18:57 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3894689480' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 23 21:18:57 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1322776160' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 23 21:18:57 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1486092316' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 23 21:18:57 compute-1 ceph-mon[80135]: from='client.27377 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:57 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2328676861' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 23 21:18:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:18:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:58.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:18:58 compute-1 nova_compute[230183]: 2025-11-23 21:18:58.059 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:18:58 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Nov 23 21:18:58 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1628049275' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 23 21:18:58 compute-1 nova_compute[230183]: 2025-11-23 21:18:58.335 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:18:58 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Nov 23 21:18:58 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/247555263' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 23 21:18:58 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1440736949' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 23 21:18:58 compute-1 ceph-mon[80135]: from='client.17337 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:18:58 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1628049275' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 23 21:18:58 compute-1 ceph-mon[80135]: pgmap v1128: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:18:58 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2565151772' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 23 21:18:58 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2288918010' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 23 21:18:58 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/247555263' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 23 21:18:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:18:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 21:18:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:59.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 21:18:59 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Nov 23 21:18:59 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3893524027' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 23 21:19:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:00.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:00 compute-1 ceph-mon[80135]: from='client.26500 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:19:00 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1713685342' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 23 21:19:00 compute-1 ceph-mon[80135]: from='client.27410 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:19:00 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/268968875' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 23 21:19:00 compute-1 ceph-mon[80135]: from='client.17364 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:19:00 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3893524027' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 23 21:19:00 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/656785731' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 23 21:19:00 compute-1 sshd-session[249107]: Invalid user btc from 92.118.39.92 port 56554
Nov 23 21:19:00 compute-1 sshd-session[249107]: Connection closed by invalid user btc 92.118.39.92 port 56554 [preauth]
Nov 23 21:19:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:01.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:01 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1664248280' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 23 21:19:01 compute-1 ceph-mon[80135]: from='client.27437 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:19:01 compute-1 ceph-mon[80135]: from='client.17388 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:19:01 compute-1 ceph-mon[80135]: from='client.26518 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:19:01 compute-1 ceph-mon[80135]: pgmap v1129: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:19:01 compute-1 ceph-mon[80135]: from='client.27446 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:19:01 compute-1 ceph-mon[80135]: from='client.27449 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:19:01 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1102969640' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 23 21:19:01 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1880805734' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 23 21:19:01 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Nov 23 21:19:01 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1478691887' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 23 21:19:01 compute-1 ovs-appctl[249574]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 23 21:19:01 compute-1 ovs-appctl[249585]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 23 21:19:01 compute-1 ovs-appctl[249596]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 23 21:19:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:02.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:02 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/827759102' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 23 21:19:02 compute-1 ceph-mon[80135]: from='client.26539 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:19:02 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1478691887' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 23 21:19:02 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/4128640648' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 23 21:19:02 compute-1 ceph-mon[80135]: from='client.26545 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:19:02 compute-1 ceph-mon[80135]: from='client.27470 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:19:02 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2929929060' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 23 21:19:02 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:19:02 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Nov 23 21:19:02 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1255782784' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 23 21:19:02 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd stat"} v 0)
Nov 23 21:19:02 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/743998604' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 23 21:19:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 21:19:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:03.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 21:19:03 compute-1 nova_compute[230183]: 2025-11-23 21:19:03.059 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:19:03 compute-1 ceph-mon[80135]: from='client.17430 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:19:03 compute-1 ceph-mon[80135]: from='client.27479 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:19:03 compute-1 ceph-mon[80135]: from='client.17439 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:19:03 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3128593780' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 23 21:19:03 compute-1 ceph-mon[80135]: pgmap v1130: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:19:03 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1255782784' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 23 21:19:03 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3714883536' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 23 21:19:03 compute-1 ceph-mon[80135]: from='client.26566 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:19:03 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/743998604' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 23 21:19:03 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/423514191' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 23 21:19:03 compute-1 nova_compute[230183]: 2025-11-23 21:19:03.337 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:19:03 compute-1 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 23 21:19:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:04.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:04 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Nov 23 21:19:04 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3870850034' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 23 21:19:04 compute-1 ceph-mon[80135]: from='client.26578 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:19:04 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:19:04 compute-1 ceph-mon[80135]: from='client.27509 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:19:04 compute-1 ceph-mon[80135]: from='client.17475 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:19:04 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2236210916' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 23 21:19:04 compute-1 ceph-mon[80135]: from='client.27518 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:19:04 compute-1 ceph-mon[80135]: from='client.17484 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:19:04 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2505412994' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 23 21:19:04 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3870850034' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 23 21:19:04 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2752389414' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 23 21:19:04 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Nov 23 21:19:04 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1747467823' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 23 21:19:04 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Nov 23 21:19:04 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4014633627' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 23 21:19:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:05.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:05 compute-1 ceph-mon[80135]: pgmap v1131: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:19:05 compute-1 ceph-mon[80135]: from='client.26596 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:19:05 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1747467823' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 23 21:19:05 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/577242787' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 23 21:19:05 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/4014633627' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 23 21:19:05 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/123989369' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 23 21:19:05 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Nov 23 21:19:05 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/273507409' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 23 21:19:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:06.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:06 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Nov 23 21:19:06 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1618491881' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 23 21:19:06 compute-1 ceph-mon[80135]: from='client.26605 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:19:06 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2492804787' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 23 21:19:06 compute-1 ceph-mon[80135]: from='client.27554 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:19:06 compute-1 ceph-mon[80135]: from='client.17523 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:19:06 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3353732940' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 23 21:19:06 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/273507409' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 23 21:19:06 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2252325486' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 23 21:19:06 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Nov 23 21:19:06 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2441055417' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 23 21:19:06 compute-1 podman[251039]: 2025-11-23 21:19:06.643855009 +0000 UTC m=+0.055227431 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 21:19:06 compute-1 podman[251038]: 2025-11-23 21:19:06.678902972 +0000 UTC m=+0.089874024 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 23 21:19:06 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Nov 23 21:19:06 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2856170711' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 23 21:19:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:19:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:07.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:19:07 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Nov 23 21:19:07 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/784096012' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 23 21:19:07 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/836800842' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 23 21:19:07 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1618491881' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 23 21:19:07 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2441055417' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 23 21:19:07 compute-1 ceph-mon[80135]: pgmap v1132: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:19:07 compute-1 ceph-mon[80135]: from='client.26635 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:19:07 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/979741711' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 23 21:19:07 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2856170711' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 23 21:19:07 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3481988654' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 23 21:19:07 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1190473260' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 23 21:19:07 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/784096012' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 23 21:19:07 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:19:07 compute-1 nova_compute[230183]: 2025-11-23 21:19:07.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:19:07 compute-1 nova_compute[230183]: 2025-11-23 21:19:07.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:19:07 compute-1 nova_compute[230183]: 2025-11-23 21:19:07.428 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:19:07 compute-1 nova_compute[230183]: 2025-11-23 21:19:07.456 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:19:07 compute-1 nova_compute[230183]: 2025-11-23 21:19:07.456 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:19:07 compute-1 nova_compute[230183]: 2025-11-23 21:19:07.456 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:19:07 compute-1 nova_compute[230183]: 2025-11-23 21:19:07.456 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 21:19:07 compute-1 nova_compute[230183]: 2025-11-23 21:19:07.457 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:19:07 compute-1 sudo[251165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:19:07 compute-1 sudo[251165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:19:07 compute-1 sudo[251165]: pam_unix(sudo:session): session closed for user root
Nov 23 21:19:07 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:19:07 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2051085304' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:19:07 compute-1 nova_compute[230183]: 2025-11-23 21:19:07.908 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:19:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:19:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:08.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:19:08 compute-1 nova_compute[230183]: 2025-11-23 21:19:08.057 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 21:19:08 compute-1 nova_compute[230183]: 2025-11-23 21:19:08.058 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4728MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 21:19:08 compute-1 nova_compute[230183]: 2025-11-23 21:19:08.059 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:19:08 compute-1 nova_compute[230183]: 2025-11-23 21:19:08.059 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:19:08 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Nov 23 21:19:08 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/102875721' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 23 21:19:08 compute-1 nova_compute[230183]: 2025-11-23 21:19:08.109 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:19:08 compute-1 nova_compute[230183]: 2025-11-23 21:19:08.340 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:19:08 compute-1 nova_compute[230183]: 2025-11-23 21:19:08.386 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 21:19:08 compute-1 nova_compute[230183]: 2025-11-23 21:19:08.387 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 21:19:08 compute-1 ceph-mon[80135]: from='client.17562 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:19:08 compute-1 ceph-mon[80135]: from='client.27596 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:19:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1759951106' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 23 21:19:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2051085304' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:19:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/888662315' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 23 21:19:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/2717766197' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 21:19:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/2717766197' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 21:19:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3235159436' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 23 21:19:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/102875721' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 23 21:19:08 compute-1 nova_compute[230183]: 2025-11-23 21:19:08.414 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:19:08 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Nov 23 21:19:08 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1363249802' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 23 21:19:08 compute-1 podman[251282]: 2025-11-23 21:19:08.768573148 +0000 UTC m=+0.066869412 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 23 21:19:08 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:19:08 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2917113464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:19:08 compute-1 nova_compute[230183]: 2025-11-23 21:19:08.871 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:19:08 compute-1 nova_compute[230183]: 2025-11-23 21:19:08.878 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:19:08 compute-1 nova_compute[230183]: 2025-11-23 21:19:08.890 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:19:08 compute-1 nova_compute[230183]: 2025-11-23 21:19:08.892 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 21:19:08 compute-1 nova_compute[230183]: 2025-11-23 21:19:08.892 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:19:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:09.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:09 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Nov 23 21:19:09 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1353647905' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 23 21:19:09 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Nov 23 21:19:09 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2424626875' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 23 21:19:09 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1827396603' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 23 21:19:09 compute-1 ceph-mon[80135]: pgmap v1133: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:19:09 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1956868693' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 23 21:19:09 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1363249802' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 23 21:19:09 compute-1 ceph-mon[80135]: from='client.17610 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:19:09 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2917113464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:19:09 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1353647905' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 23 21:19:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:10.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:10 compute-1 sshd-session[251062]: Invalid user carlos from 80.94.95.116 port 40482
Nov 23 21:19:10 compute-1 sshd-session[251062]: Connection closed by invalid user carlos 80.94.95.116 port 40482 [preauth]
Nov 23 21:19:10 compute-1 ceph-mon[80135]: from='client.26677 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:19:10 compute-1 ceph-mon[80135]: from='client.27656 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:19:10 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2424626875' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 23 21:19:10 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1036166281' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 23 21:19:10 compute-1 ceph-mon[80135]: from='client.17634 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:19:10 compute-1 ceph-mon[80135]: from='client.27674 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:19:10 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/556149434' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 23 21:19:10 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1688899299' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 23 21:19:10 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Nov 23 21:19:10 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1708152719' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 23 21:19:10 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Nov 23 21:19:10 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1026958516' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 23 21:19:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:11.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:11 compute-1 ceph-mon[80135]: from='client.17646 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:19:11 compute-1 ceph-mon[80135]: from='client.27686 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:19:11 compute-1 ceph-mon[80135]: from='client.26695 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:19:11 compute-1 ceph-mon[80135]: pgmap v1134: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:19:11 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1708152719' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 23 21:19:11 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/862591127' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 23 21:19:11 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2044575977' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 23 21:19:11 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1026958516' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 23 21:19:11 compute-1 nova_compute[230183]: 2025-11-23 21:19:11.891 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:19:11 compute-1 nova_compute[230183]: 2025-11-23 21:19:11.891 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:19:11 compute-1 nova_compute[230183]: 2025-11-23 21:19:11.892 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:19:11 compute-1 nova_compute[230183]: 2025-11-23 21:19:11.892 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:19:11 compute-1 nova_compute[230183]: 2025-11-23 21:19:11.892 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 21:19:11 compute-1 virtqemud[229705]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 23 21:19:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:12.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:12 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Nov 23 21:19:12 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3374333239' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 23 21:19:12 compute-1 systemd[1]: Starting Time & Date Service...
Nov 23 21:19:12 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:19:12 compute-1 nova_compute[230183]: 2025-11-23 21:19:12.423 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:19:12 compute-1 nova_compute[230183]: 2025-11-23 21:19:12.425 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:19:12 compute-1 nova_compute[230183]: 2025-11-23 21:19:12.426 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 21:19:12 compute-1 nova_compute[230183]: 2025-11-23 21:19:12.426 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 21:19:12 compute-1 nova_compute[230183]: 2025-11-23 21:19:12.440 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 21:19:12 compute-1 systemd[1]: Started Time & Date Service.
Nov 23 21:19:12 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Nov 23 21:19:12 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1773746850' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 23 21:19:12 compute-1 ceph-mon[80135]: from='client.17670 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:19:12 compute-1 ceph-mon[80135]: from='client.26707 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:19:12 compute-1 ceph-mon[80135]: from='client.27710 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:19:12 compute-1 ceph-mon[80135]: from='client.17682 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:19:12 compute-1 ceph-mon[80135]: from='client.26713 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:19:12 compute-1 ceph-mon[80135]: from='client.27719 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:19:12 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3164709178' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 23 21:19:12 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2763860580' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 23 21:19:12 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3374333239' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 23 21:19:12 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/307253547' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 23 21:19:12 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2698622030' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 23 21:19:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:13.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:13 compute-1 nova_compute[230183]: 2025-11-23 21:19:13.110 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:19:13 compute-1 nova_compute[230183]: 2025-11-23 21:19:13.341 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:19:13 compute-1 ceph-mon[80135]: pgmap v1135: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:19:13 compute-1 ceph-mon[80135]: from='client.17718 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:19:13 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1773746850' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 23 21:19:13 compute-1 ceph-mon[80135]: from='client.17721 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:19:13 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/206554228' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 23 21:19:13 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Nov 23 21:19:13 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/739475637' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 23 21:19:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 21:19:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:14.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 21:19:14 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Nov 23 21:19:14 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2520495412' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 23 21:19:14 compute-1 ceph-mon[80135]: from='client.27752 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:19:14 compute-1 ceph-mon[80135]: from='client.17733 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:19:14 compute-1 ceph-mon[80135]: from='client.26737 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:19:14 compute-1 ceph-mon[80135]: from='client.27770 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:19:14 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3600430993' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 23 21:19:14 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/739475637' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 23 21:19:14 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1650076849' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 23 21:19:14 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3257865756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:19:14 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3652618139' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 23 21:19:14 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2520495412' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 23 21:19:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 21:19:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:15.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 21:19:15 compute-1 ceph-mon[80135]: pgmap v1136: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:19:15 compute-1 ceph-mon[80135]: from='client.26767 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:19:15 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2289863147' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:19:15 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3398821895' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 23 21:19:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:16.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:16 compute-1 ceph-mon[80135]: from='client.26779 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:19:16 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/4268883983' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 23 21:19:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:19:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:17.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:19:17 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:19:17 compute-1 ceph-mon[80135]: pgmap v1137: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:19:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:18.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:18 compute-1 nova_compute[230183]: 2025-11-23 21:19:18.149 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:19:18 compute-1 nova_compute[230183]: 2025-11-23 21:19:18.344 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:19:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:19:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 21:19:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:19.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 21:19:19 compute-1 ceph-mon[80135]: pgmap v1138: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:19:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:19:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:20.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:19:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:19:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:21.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:19:21 compute-1 ceph-mon[80135]: pgmap v1139: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:19:21 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/4166569880' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:19:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:22.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:22 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:19:22 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3744182617' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:19:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:23.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:23 compute-1 nova_compute[230183]: 2025-11-23 21:19:23.151 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:19:23 compute-1 nova_compute[230183]: 2025-11-23 21:19:23.346 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:19:23 compute-1 ceph-mon[80135]: pgmap v1140: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:19:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:19:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:24.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:19:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:25.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:26 compute-1 ceph-mon[80135]: pgmap v1141: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:19:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:26.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:27 compute-1 ceph-mon[80135]: pgmap v1142: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:19:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:27.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:27 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:19:27 compute-1 sudo[252098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:19:27 compute-1 sudo[252098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:19:27 compute-1 sudo[252098]: pam_unix(sudo:session): session closed for user root
Nov 23 21:19:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:28.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:28 compute-1 nova_compute[230183]: 2025-11-23 21:19:28.151 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:19:28 compute-1 nova_compute[230183]: 2025-11-23 21:19:28.349 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:19:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:29.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:29 compute-1 ceph-mon[80135]: pgmap v1143: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:19:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:19:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:30.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:19:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:31.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:31 compute-1 ceph-mon[80135]: pgmap v1144: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:19:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:32.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:32 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:19:32 compute-1 ceph-mon[80135]: pgmap v1145: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:19:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:33.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:33 compute-1 nova_compute[230183]: 2025-11-23 21:19:33.205 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:19:33 compute-1 nova_compute[230183]: 2025-11-23 21:19:33.350 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:19:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:19:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:34.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:34 compute-1 ceph-mon[80135]: pgmap v1146: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:19:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:35.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:19:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:36.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:19:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:19:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:37.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:19:37 compute-1 ceph-mon[80135]: pgmap v1147: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:19:37 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:19:37 compute-1 sudo[252127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 21:19:37 compute-1 sudo[252127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:19:37 compute-1 sudo[252127]: pam_unix(sudo:session): session closed for user root
Nov 23 21:19:37 compute-1 podman[252153]: 2025-11-23 21:19:37.796109219 +0000 UTC m=+0.051717478 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 23 21:19:37 compute-1 sudo[252165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 21:19:37 compute-1 sudo[252165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:19:37 compute-1 podman[252151]: 2025-11-23 21:19:37.854066902 +0000 UTC m=+0.109230010 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 23 21:19:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:38.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:38 compute-1 nova_compute[230183]: 2025-11-23 21:19:38.205 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:19:38 compute-1 sudo[252165]: pam_unix(sudo:session): session closed for user root
Nov 23 21:19:38 compute-1 nova_compute[230183]: 2025-11-23 21:19:38.352 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:19:38 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:19:38 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 21:19:38 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:19:38 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:19:38 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 21:19:38 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 21:19:38 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:19:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:39.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:39 compute-1 ceph-mon[80135]: pgmap v1148: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 23 21:19:39 compute-1 podman[252250]: 2025-11-23 21:19:39.638713085 +0000 UTC m=+0.053471405 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Nov 23 21:19:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:40.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.002000052s ======
Nov 23 21:19:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:41.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Nov 23 21:19:41 compute-1 ceph-mon[80135]: pgmap v1149: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:19:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:42.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:42 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:19:42 compute-1 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 23 21:19:42 compute-1 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 23 21:19:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:43.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:43 compute-1 nova_compute[230183]: 2025-11-23 21:19:43.207 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:19:43 compute-1 nova_compute[230183]: 2025-11-23 21:19:43.354 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:19:43 compute-1 sudo[252276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 21:19:43 compute-1 sudo[252276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:19:43 compute-1 sudo[252276]: pam_unix(sudo:session): session closed for user root
Nov 23 21:19:43 compute-1 ceph-mon[80135]: pgmap v1150: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 23 21:19:43 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:19:43 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:19:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:19:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:44.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:19:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:19:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:45.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:19:45 compute-1 ceph-mon[80135]: pgmap v1151: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:19:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:46.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:47.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:47 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:19:47 compute-1 ceph-mon[80135]: pgmap v1152: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 23 21:19:47 compute-1 sudo[252304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:19:47 compute-1 sudo[252304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:19:47 compute-1 sudo[252304]: pam_unix(sudo:session): session closed for user root
Nov 23 21:19:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:48.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:48 compute-1 nova_compute[230183]: 2025-11-23 21:19:48.209 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:19:48 compute-1 nova_compute[230183]: 2025-11-23 21:19:48.355 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:19:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:19:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:19:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:49.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:19:49 compute-1 ceph-mon[80135]: pgmap v1153: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 23 21:19:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:50.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:19:51.077 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:19:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:19:51.078 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:19:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:19:51.078 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:19:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:19:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:51.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:19:51 compute-1 ceph-mon[80135]: pgmap v1154: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:19:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:52.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:52 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:19:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 21:19:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:53.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 21:19:53 compute-1 nova_compute[230183]: 2025-11-23 21:19:53.212 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:19:53 compute-1 nova_compute[230183]: 2025-11-23 21:19:53.356 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:19:53 compute-1 ceph-mon[80135]: pgmap v1155: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:19:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:54.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:54 compute-1 ceph-mon[80135]: pgmap v1156: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:19:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:55.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:56.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 21:19:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:57.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 21:19:57 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:19:57 compute-1 ceph-mon[80135]: pgmap v1157: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:19:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:58.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:58 compute-1 nova_compute[230183]: 2025-11-23 21:19:58.212 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:19:58 compute-1 nova_compute[230183]: 2025-11-23 21:19:58.357 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:19:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:19:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:19:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:59.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:19:59 compute-1 ceph-mon[80135]: pgmap v1158: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:20:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:20:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:00.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:20:00 compute-1 ceph-mon[80135]: Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Nov 23 21:20:00 compute-1 ceph-mon[80135]: [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Nov 23 21:20:00 compute-1 ceph-mon[80135]:     daemon nfs.cephfs.0.0.compute-1.fuxuha on compute-1 is in error state
Nov 23 21:20:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:01.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:01 compute-1 ceph-mon[80135]: pgmap v1159: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:20:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:02.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:02 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:20:03 compute-1 ceph-mon[80135]: pgmap v1160: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:20:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:03.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:03 compute-1 nova_compute[230183]: 2025-11-23 21:20:03.267 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:20:03 compute-1 nova_compute[230183]: 2025-11-23 21:20:03.358 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:20:04 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:20:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:04.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:04 compute-1 sudo[244905]: pam_unix(sudo:session): session closed for user root
Nov 23 21:20:04 compute-1 sshd-session[244904]: Received disconnect from 192.168.122.10 port 44226:11: disconnected by user
Nov 23 21:20:04 compute-1 sshd-session[244904]: Disconnected from user zuul 192.168.122.10 port 44226
Nov 23 21:20:04 compute-1 sshd-session[244901]: pam_unix(sshd:session): session closed for user zuul
Nov 23 21:20:04 compute-1 systemd[1]: session-55.scope: Deactivated successfully.
Nov 23 21:20:04 compute-1 systemd[1]: session-55.scope: Consumed 2min 51.602s CPU time, 751.3M memory peak, read 284.1M from disk, written 65.2M to disk.
Nov 23 21:20:04 compute-1 systemd-logind[793]: Session 55 logged out. Waiting for processes to exit.
Nov 23 21:20:04 compute-1 systemd-logind[793]: Removed session 55.
Nov 23 21:20:05 compute-1 sshd-session[252338]: Accepted publickey for zuul from 192.168.122.10 port 41636 ssh2: ECDSA SHA256:7LF3rB/846W//CS4OIcVKlH1BXQGVCcZuH+b9rjPyTo
Nov 23 21:20:05 compute-1 systemd-logind[793]: New session 56 of user zuul.
Nov 23 21:20:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:20:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:05.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:20:05 compute-1 systemd[1]: Started Session 56 of User zuul.
Nov 23 21:20:05 compute-1 sshd-session[252338]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 21:20:05 compute-1 ceph-mon[80135]: pgmap v1161: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:20:05 compute-1 sudo[252342]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-1-2025-11-23-gsnipqx.tar.xz
Nov 23 21:20:05 compute-1 sudo[252342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:20:05 compute-1 sudo[252342]: pam_unix(sudo:session): session closed for user root
Nov 23 21:20:05 compute-1 sshd-session[252341]: Received disconnect from 192.168.122.10 port 41636:11: disconnected by user
Nov 23 21:20:05 compute-1 sshd-session[252341]: Disconnected from user zuul 192.168.122.10 port 41636
Nov 23 21:20:05 compute-1 sshd-session[252338]: pam_unix(sshd:session): session closed for user zuul
Nov 23 21:20:05 compute-1 systemd[1]: session-56.scope: Deactivated successfully.
Nov 23 21:20:05 compute-1 systemd-logind[793]: Session 56 logged out. Waiting for processes to exit.
Nov 23 21:20:05 compute-1 systemd-logind[793]: Removed session 56.
Nov 23 21:20:05 compute-1 sshd-session[252367]: Accepted publickey for zuul from 192.168.122.10 port 41650 ssh2: ECDSA SHA256:7LF3rB/846W//CS4OIcVKlH1BXQGVCcZuH+b9rjPyTo
Nov 23 21:20:05 compute-1 systemd-logind[793]: New session 57 of user zuul.
Nov 23 21:20:05 compute-1 systemd[1]: Started Session 57 of User zuul.
Nov 23 21:20:05 compute-1 sshd-session[252367]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 21:20:05 compute-1 sudo[252371]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Nov 23 21:20:05 compute-1 sudo[252371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:20:05 compute-1 sudo[252371]: pam_unix(sudo:session): session closed for user root
Nov 23 21:20:05 compute-1 sshd-session[252370]: Received disconnect from 192.168.122.10 port 41650:11: disconnected by user
Nov 23 21:20:05 compute-1 sshd-session[252370]: Disconnected from user zuul 192.168.122.10 port 41650
Nov 23 21:20:05 compute-1 sshd-session[252367]: pam_unix(sshd:session): session closed for user zuul
Nov 23 21:20:05 compute-1 systemd[1]: session-57.scope: Deactivated successfully.
Nov 23 21:20:05 compute-1 systemd-logind[793]: Session 57 logged out. Waiting for processes to exit.
Nov 23 21:20:05 compute-1 systemd-logind[793]: Removed session 57.
Nov 23 21:20:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:06.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:06 compute-1 ceph-mon[80135]: pgmap v1162: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:20:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:20:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:07.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:20:07 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:20:07 compute-1 nova_compute[230183]: 2025-11-23 21:20:07.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:20:07 compute-1 nova_compute[230183]: 2025-11-23 21:20:07.428 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:20:07 compute-1 nova_compute[230183]: 2025-11-23 21:20:07.449 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:20:07 compute-1 nova_compute[230183]: 2025-11-23 21:20:07.450 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:20:07 compute-1 nova_compute[230183]: 2025-11-23 21:20:07.450 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:20:07 compute-1 nova_compute[230183]: 2025-11-23 21:20:07.450 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 21:20:07 compute-1 nova_compute[230183]: 2025-11-23 21:20:07.450 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:20:07 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:20:07 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/280190509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:20:07 compute-1 nova_compute[230183]: 2025-11-23 21:20:07.871 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:20:07 compute-1 sudo[252420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:20:07 compute-1 sudo[252420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:20:07 compute-1 sudo[252420]: pam_unix(sudo:session): session closed for user root
Nov 23 21:20:07 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/280190509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:20:07 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/2272389192' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 21:20:07 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/2272389192' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 21:20:08 compute-1 podman[252445]: 2025-11-23 21:20:08.041037948 +0000 UTC m=+0.057511333 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 23 21:20:08 compute-1 nova_compute[230183]: 2025-11-23 21:20:08.059 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 21:20:08 compute-1 nova_compute[230183]: 2025-11-23 21:20:08.060 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4833MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 21:20:08 compute-1 nova_compute[230183]: 2025-11-23 21:20:08.061 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:20:08 compute-1 nova_compute[230183]: 2025-11-23 21:20:08.061 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:20:08 compute-1 podman[252444]: 2025-11-23 21:20:08.070575774 +0000 UTC m=+0.089151395 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 23 21:20:08 compute-1 nova_compute[230183]: 2025-11-23 21:20:08.129 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 21:20:08 compute-1 nova_compute[230183]: 2025-11-23 21:20:08.130 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 21:20:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:08.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:08 compute-1 nova_compute[230183]: 2025-11-23 21:20:08.155 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:20:08 compute-1 nova_compute[230183]: 2025-11-23 21:20:08.266 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:20:08 compute-1 nova_compute[230183]: 2025-11-23 21:20:08.359 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:20:08 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:20:08 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/515802534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:20:08 compute-1 nova_compute[230183]: 2025-11-23 21:20:08.585 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:20:08 compute-1 nova_compute[230183]: 2025-11-23 21:20:08.590 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:20:08 compute-1 nova_compute[230183]: 2025-11-23 21:20:08.612 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:20:08 compute-1 nova_compute[230183]: 2025-11-23 21:20:08.614 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 21:20:08 compute-1 nova_compute[230183]: 2025-11-23 21:20:08.614 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:20:09 compute-1 ceph-mon[80135]: pgmap v1163: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:20:09 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/515802534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:20:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:20:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:09.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:20:09 compute-1 nova_compute[230183]: 2025-11-23 21:20:09.614 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:20:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:10.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:10 compute-1 podman[252513]: 2025-11-23 21:20:10.631344775 +0000 UTC m=+0.047523296 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 23 21:20:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:11.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:11 compute-1 ceph-mon[80135]: pgmap v1164: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:20:11 compute-1 nova_compute[230183]: 2025-11-23 21:20:11.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:20:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:12.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:12 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:20:12 compute-1 nova_compute[230183]: 2025-11-23 21:20:12.428 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:20:12 compute-1 nova_compute[230183]: 2025-11-23 21:20:12.428 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 21:20:12 compute-1 nova_compute[230183]: 2025-11-23 21:20:12.428 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 21:20:12 compute-1 nova_compute[230183]: 2025-11-23 21:20:12.441 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 21:20:12 compute-1 nova_compute[230183]: 2025-11-23 21:20:12.441 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:20:12 compute-1 nova_compute[230183]: 2025-11-23 21:20:12.441 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:20:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:13.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:13 compute-1 nova_compute[230183]: 2025-11-23 21:20:13.306 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:20:13 compute-1 nova_compute[230183]: 2025-11-23 21:20:13.361 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:20:13 compute-1 ceph-mon[80135]: pgmap v1165: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:20:13 compute-1 nova_compute[230183]: 2025-11-23 21:20:13.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:20:13 compute-1 nova_compute[230183]: 2025-11-23 21:20:13.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:20:13 compute-1 nova_compute[230183]: 2025-11-23 21:20:13.427 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 21:20:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:14.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:20:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:15.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:20:15 compute-1 ceph-mon[80135]: pgmap v1166: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:20:15 compute-1 nova_compute[230183]: 2025-11-23 21:20:15.422 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:20:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:16.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:16 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3456409841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:20:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:17.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:17 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:20:17 compute-1 ceph-mon[80135]: pgmap v1167: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:20:17 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/761156875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:20:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:18.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:18 compute-1 nova_compute[230183]: 2025-11-23 21:20:18.308 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:20:18 compute-1 nova_compute[230183]: 2025-11-23 21:20:18.363 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:20:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:20:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:19.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:19 compute-1 ceph-mon[80135]: pgmap v1168: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:20:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:20.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:20:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:21.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:20:21 compute-1 ceph-mon[80135]: pgmap v1169: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:20:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:22.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:22 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:20:22 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/794938743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:20:22 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2626062665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:20:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:23.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:23 compute-1 nova_compute[230183]: 2025-11-23 21:20:23.311 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:20:23 compute-1 nova_compute[230183]: 2025-11-23 21:20:23.364 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:20:23 compute-1 ceph-mon[80135]: pgmap v1170: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:20:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:24.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:25.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:25 compute-1 ceph-mon[80135]: pgmap v1171: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:20:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:20:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:26.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:20:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:27.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:27 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:20:27 compute-1 ceph-mon[80135]: pgmap v1172: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:20:28 compute-1 sudo[252543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:20:28 compute-1 sudo[252543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:20:28 compute-1 sudo[252543]: pam_unix(sudo:session): session closed for user root
Nov 23 21:20:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:20:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:28.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:20:28 compute-1 nova_compute[230183]: 2025-11-23 21:20:28.313 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:20:28 compute-1 nova_compute[230183]: 2025-11-23 21:20:28.365 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:20:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:20:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:29.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:20:29 compute-1 ceph-mon[80135]: pgmap v1173: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:20:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:30.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:20:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:31.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:20:31 compute-1 ceph-mon[80135]: pgmap v1174: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:20:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:32.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:32 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:20:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:33.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:33 compute-1 nova_compute[230183]: 2025-11-23 21:20:33.358 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:20:33 compute-1 nova_compute[230183]: 2025-11-23 21:20:33.367 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:20:33 compute-1 ceph-mon[80135]: pgmap v1175: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:20:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:20:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:34.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:35 compute-1 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 21:20:35 compute-1 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 13K writes, 50K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 13K writes, 3842 syncs, 3.51 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2001 writes, 6920 keys, 2001 commit groups, 1.0 writes per commit group, ingest: 6.50 MB, 0.01 MB/s
                                           Interval WAL: 2001 writes, 838 syncs, 2.39 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 21:20:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:20:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:35.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:20:35 compute-1 ceph-mon[80135]: pgmap v1176: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:20:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:36.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:37 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:20:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:37.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:37 compute-1 ceph-mon[80135]: pgmap v1177: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:20:38 compute-1 nova_compute[230183]: 2025-11-23 21:20:38.540 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:20:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:38.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:38 compute-1 podman[252574]: 2025-11-23 21:20:38.640989937 +0000 UTC m=+0.052235623 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 21:20:38 compute-1 podman[252573]: 2025-11-23 21:20:38.66404089 +0000 UTC m=+0.082985370 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 21:20:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:39.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:39 compute-1 ceph-mon[80135]: pgmap v1178: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:20:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:20:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:40.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:20:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:20:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:41.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:20:41 compute-1 ceph-mon[80135]: pgmap v1179: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:20:41 compute-1 podman[252619]: 2025-11-23 21:20:41.668384133 +0000 UTC m=+0.070033146 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd)
Nov 23 21:20:42 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:20:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:42.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:43 compute-1 nova_compute[230183]: 2025-11-23 21:20:43.362 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:20:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:43.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:43 compute-1 nova_compute[230183]: 2025-11-23 21:20:43.543 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:20:43 compute-1 ceph-mon[80135]: pgmap v1180: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:20:43 compute-1 sudo[252641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 21:20:43 compute-1 sudo[252641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:20:43 compute-1 sudo[252641]: pam_unix(sudo:session): session closed for user root
Nov 23 21:20:43 compute-1 sudo[252666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Nov 23 21:20:43 compute-1 sudo[252666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:20:44 compute-1 sudo[252666]: pam_unix(sudo:session): session closed for user root
Nov 23 21:20:44 compute-1 sudo[252712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 21:20:44 compute-1 sudo[252712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:20:44 compute-1 sudo[252712]: pam_unix(sudo:session): session closed for user root
Nov 23 21:20:44 compute-1 sudo[252737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 21:20:44 compute-1 sudo[252737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:20:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:44.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:44 compute-1 sudo[252737]: pam_unix(sudo:session): session closed for user root
Nov 23 21:20:45 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:20:45 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:20:45 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:20:45 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:20:45 compute-1 ceph-mon[80135]: pgmap v1181: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:20:45 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:20:45 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 21:20:45 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:20:45 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:20:45 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 21:20:45 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 21:20:45 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:20:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:45.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:45 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #64. Immutable memtables: 0.
Nov 23 21:20:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:20:45.751805) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 21:20:45 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 64
Nov 23 21:20:45 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932845751972, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2635, "num_deletes": 508, "total_data_size": 5368939, "memory_usage": 5451584, "flush_reason": "Manual Compaction"}
Nov 23 21:20:45 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #65: started
Nov 23 21:20:45 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932845783845, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 65, "file_size": 3492192, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33739, "largest_seqno": 36369, "table_properties": {"data_size": 3481003, "index_size": 6403, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3717, "raw_key_size": 30131, "raw_average_key_size": 20, "raw_value_size": 3455012, "raw_average_value_size": 2350, "num_data_blocks": 274, "num_entries": 1470, "num_filter_entries": 1470, "num_deletions": 508, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763932684, "oldest_key_time": 1763932684, "file_creation_time": 1763932845, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Nov 23 21:20:45 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 32108 microseconds, and 14448 cpu microseconds.
Nov 23 21:20:45 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 21:20:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:20:45.783933) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #65: 3492192 bytes OK
Nov 23 21:20:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:20:45.783952) [db/memtable_list.cc:519] [default] Level-0 commit table #65 started
Nov 23 21:20:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:20:45.785917) [db/memtable_list.cc:722] [default] Level-0 commit table #65: memtable #1 done
Nov 23 21:20:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:20:45.785932) EVENT_LOG_v1 {"time_micros": 1763932845785927, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 21:20:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:20:45.785950) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 21:20:45 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 5355374, prev total WAL file size 5355374, number of live WAL files 2.
Nov 23 21:20:45 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000061.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:20:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:20:45.787642) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323530' seq:72057594037927935, type:22 .. '6B7600353033' seq:0, type:0; will stop at (end)
Nov 23 21:20:45 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 21:20:45 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [65(3410KB)], [63(13MB)]
Nov 23 21:20:45 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932845787693, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [65], "files_L6": [63], "score": -1, "input_data_size": 18098452, "oldest_snapshot_seqno": -1}
Nov 23 21:20:45 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #66: 6671 keys, 16620064 bytes, temperature: kUnknown
Nov 23 21:20:45 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932845957957, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 66, "file_size": 16620064, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16573642, "index_size": 28646, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16709, "raw_key_size": 174009, "raw_average_key_size": 26, "raw_value_size": 16451780, "raw_average_value_size": 2466, "num_data_blocks": 1141, "num_entries": 6671, "num_filter_entries": 6671, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763932845, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 66, "seqno_to_time_mapping": "N/A"}}
Nov 23 21:20:45 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 21:20:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:20:45.958197) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 16620064 bytes
Nov 23 21:20:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:20:45.959515) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 106.3 rd, 97.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 13.9 +0.0 blob) out(15.9 +0.0 blob), read-write-amplify(9.9) write-amplify(4.8) OK, records in: 7704, records dropped: 1033 output_compression: NoCompression
Nov 23 21:20:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:20:45.959542) EVENT_LOG_v1 {"time_micros": 1763932845959531, "job": 38, "event": "compaction_finished", "compaction_time_micros": 170335, "compaction_time_cpu_micros": 58410, "output_level": 6, "num_output_files": 1, "total_output_size": 16620064, "num_input_records": 7704, "num_output_records": 6671, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 21:20:45 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:20:45 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932845960400, "job": 38, "event": "table_file_deletion", "file_number": 65}
Nov 23 21:20:45 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000063.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:20:45 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932845964346, "job": 38, "event": "table_file_deletion", "file_number": 63}
Nov 23 21:20:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:20:45.787527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:20:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:20:45.964413) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:20:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:20:45.964418) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:20:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:20:45.964419) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:20:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:20:45.964421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:20:45 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:20:45.964423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:20:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:46.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:46 compute-1 ceph-mon[80135]: pgmap v1182: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 23 21:20:46 compute-1 ceph-mon[80135]: pgmap v1183: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Nov 23 21:20:47 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:20:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:20:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:47.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:20:48 compute-1 sudo[252793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:20:48 compute-1 sudo[252793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:20:48 compute-1 sudo[252793]: pam_unix(sudo:session): session closed for user root
Nov 23 21:20:48 compute-1 nova_compute[230183]: 2025-11-23 21:20:48.364 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:20:48 compute-1 nova_compute[230183]: 2025-11-23 21:20:48.545 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:20:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:20:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:48.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:20:48 compute-1 ceph-mon[80135]: pgmap v1184: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Nov 23 21:20:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:20:49 compute-1 sshd[168248]: Timeout before authentication for connection from 68.71.242.113 to 38.102.83.106, pid = 246837
Nov 23 21:20:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:49.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:49 compute-1 sudo[252819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 21:20:49 compute-1 sudo[252819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:20:49 compute-1 sudo[252819]: pam_unix(sudo:session): session closed for user root
Nov 23 21:20:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:50.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:50 compute-1 ceph-mon[80135]: pgmap v1185: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 715 B/s rd, 0 op/s
Nov 23 21:20:50 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:20:50 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:20:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:20:51.078 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:20:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:20:51.079 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:20:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:20:51.079 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:20:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:51.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:52 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:20:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:52.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:52 compute-1 ceph-mon[80135]: pgmap v1186: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Nov 23 21:20:53 compute-1 nova_compute[230183]: 2025-11-23 21:20:53.366 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:20:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:53.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:53 compute-1 nova_compute[230183]: 2025-11-23 21:20:53.546 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:20:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:54.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:54 compute-1 ceph-mon[80135]: pgmap v1187: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 715 B/s rd, 0 op/s
Nov 23 21:20:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:55.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:56.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:56 compute-1 ceph-mon[80135]: pgmap v1188: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 21:20:57 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:20:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:57.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:58 compute-1 nova_compute[230183]: 2025-11-23 21:20:58.370 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:20:58 compute-1 nova_compute[230183]: 2025-11-23 21:20:58.546 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:20:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:58.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:20:58 compute-1 ceph-mon[80135]: pgmap v1189: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:20:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:20:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:20:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:59.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:00.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:00 compute-1 ceph-mon[80135]: pgmap v1190: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:21:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:01.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:02 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:21:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:02.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:02 compute-1 ceph-mon[80135]: pgmap v1191: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:21:03 compute-1 nova_compute[230183]: 2025-11-23 21:21:03.386 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:21:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:03.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:03 compute-1 nova_compute[230183]: 2025-11-23 21:21:03.547 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:21:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:21:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:04.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:04 compute-1 ceph-mon[80135]: pgmap v1192: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:21:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:05.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:06 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 21:21:06 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 7004 writes, 36K keys, 7004 commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.04 MB/s
                                           Cumulative WAL: 7004 writes, 7004 syncs, 1.00 writes per sync, written: 0.09 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1580 writes, 8388 keys, 1580 commit groups, 1.0 writes per commit group, ingest: 17.94 MB, 0.03 MB/s
                                           Interval WAL: 1580 writes, 1580 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     64.9      0.85              0.15        19    0.045       0      0       0.0       0.0
                                             L6      1/0   15.85 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.4     89.6     77.0      3.15              0.70        18    0.175    101K    10K       0.0       0.0
                                            Sum      1/0   15.85 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.4     70.5     74.4      4.00              0.85        37    0.108    101K    10K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   5.9     85.0     88.5      0.99              0.30        10    0.099     34K   3616       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0     89.6     77.0      3.15              0.70        18    0.175    101K    10K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     65.1      0.85              0.15        18    0.047       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.054, interval 0.015
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.29 GB write, 0.12 MB/s write, 0.28 GB read, 0.12 MB/s read, 4.0 seconds
                                           Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.08 GB read, 0.14 MB/s read, 1.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560649e57350#2 capacity: 304.00 MB usage: 24.39 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.00019 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1490,23.59 MB,7.75977%) FilterBlock(37,298.73 KB,0.0959647%) IndexBlock(37,520.92 KB,0.16734%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 23 21:21:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:06.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:06 compute-1 ceph-mon[80135]: pgmap v1193: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:21:07 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:21:07 compute-1 nova_compute[230183]: 2025-11-23 21:21:07.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:21:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:07.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:08 compute-1 sudo[252853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:21:08 compute-1 sudo[252853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:21:08 compute-1 sudo[252853]: pam_unix(sudo:session): session closed for user root
Nov 23 21:21:08 compute-1 nova_compute[230183]: 2025-11-23 21:21:08.386 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:21:08 compute-1 nova_compute[230183]: 2025-11-23 21:21:08.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:21:08 compute-1 nova_compute[230183]: 2025-11-23 21:21:08.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:21:08 compute-1 nova_compute[230183]: 2025-11-23 21:21:08.447 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:21:08 compute-1 nova_compute[230183]: 2025-11-23 21:21:08.447 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:21:08 compute-1 nova_compute[230183]: 2025-11-23 21:21:08.447 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:21:08 compute-1 nova_compute[230183]: 2025-11-23 21:21:08.448 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 21:21:08 compute-1 nova_compute[230183]: 2025-11-23 21:21:08.448 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:21:08 compute-1 nova_compute[230183]: 2025-11-23 21:21:08.549 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:21:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:08.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:08 compute-1 ceph-mon[80135]: pgmap v1194: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:21:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/1693615462' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 21:21:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/1693615462' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 21:21:08 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:21:08 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/18675691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:21:08 compute-1 nova_compute[230183]: 2025-11-23 21:21:08.880 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:21:09 compute-1 nova_compute[230183]: 2025-11-23 21:21:09.402 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 21:21:09 compute-1 nova_compute[230183]: 2025-11-23 21:21:09.403 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4842MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 21:21:09 compute-1 nova_compute[230183]: 2025-11-23 21:21:09.403 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:21:09 compute-1 nova_compute[230183]: 2025-11-23 21:21:09.403 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:21:09 compute-1 nova_compute[230183]: 2025-11-23 21:21:09.465 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 21:21:09 compute-1 nova_compute[230183]: 2025-11-23 21:21:09.465 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 21:21:09 compute-1 nova_compute[230183]: 2025-11-23 21:21:09.493 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:21:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:09.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:09 compute-1 podman[252902]: 2025-11-23 21:21:09.645611278 +0000 UTC m=+0.060859323 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, 
config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 21:21:09 compute-1 podman[252901]: 2025-11-23 21:21:09.6708456 +0000 UTC m=+0.086036683 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 21:21:09 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/18675691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:21:09 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:21:09 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/46532897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:21:09 compute-1 nova_compute[230183]: 2025-11-23 21:21:09.930 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:21:09 compute-1 nova_compute[230183]: 2025-11-23 21:21:09.937 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:21:09 compute-1 nova_compute[230183]: 2025-11-23 21:21:09.954 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:21:09 compute-1 nova_compute[230183]: 2025-11-23 21:21:09.956 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 21:21:09 compute-1 nova_compute[230183]: 2025-11-23 21:21:09.956 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:21:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:10.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:10 compute-1 ceph-mon[80135]: pgmap v1195: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:21:10 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/46532897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:21:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:11.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:12 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:21:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:12.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:12 compute-1 podman[252967]: 2025-11-23 21:21:12.637628663 +0000 UTC m=+0.057312847 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible)
Nov 23 21:21:12 compute-1 ceph-mon[80135]: pgmap v1196: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Nov 23 21:21:12 compute-1 nova_compute[230183]: 2025-11-23 21:21:12.957 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:21:12 compute-1 nova_compute[230183]: 2025-11-23 21:21:12.957 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:21:12 compute-1 nova_compute[230183]: 2025-11-23 21:21:12.958 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:21:13 compute-1 nova_compute[230183]: 2025-11-23 21:21:13.413 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:21:13 compute-1 nova_compute[230183]: 2025-11-23 21:21:13.423 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:21:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 21:21:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:13.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 21:21:13 compute-1 nova_compute[230183]: 2025-11-23 21:21:13.550 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:21:14 compute-1 nova_compute[230183]: 2025-11-23 21:21:14.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:21:14 compute-1 nova_compute[230183]: 2025-11-23 21:21:14.426 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 21:21:14 compute-1 nova_compute[230183]: 2025-11-23 21:21:14.427 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 21:21:14 compute-1 nova_compute[230183]: 2025-11-23 21:21:14.439 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 21:21:14 compute-1 nova_compute[230183]: 2025-11-23 21:21:14.439 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:21:14 compute-1 nova_compute[230183]: 2025-11-23 21:21:14.440 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 21:21:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:14.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:14 compute-1 ceph-mon[80135]: pgmap v1197: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 21:21:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:15.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:21:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:16.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:21:16 compute-1 ceph-mon[80135]: pgmap v1198: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Nov 23 21:21:17 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:21:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:21:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:17.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:21:18 compute-1 nova_compute[230183]: 2025-11-23 21:21:18.416 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:21:18 compute-1 nova_compute[230183]: 2025-11-23 21:21:18.552 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:21:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:18.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:18 compute-1 ceph-mon[80135]: pgmap v1199: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 21:21:18 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1849403323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:21:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:21:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:21:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:19.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:21:19 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3325004431' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:21:19 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #67. Immutable memtables: 0.
Nov 23 21:21:19 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:21:19.861024) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 21:21:19 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 67
Nov 23 21:21:19 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932879861065, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 580, "num_deletes": 251, "total_data_size": 1036719, "memory_usage": 1047368, "flush_reason": "Manual Compaction"}
Nov 23 21:21:19 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #68: started
Nov 23 21:21:19 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932879869583, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 68, "file_size": 682582, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36374, "largest_seqno": 36949, "table_properties": {"data_size": 679529, "index_size": 1025, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7144, "raw_average_key_size": 19, "raw_value_size": 673450, "raw_average_value_size": 1815, "num_data_blocks": 44, "num_entries": 371, "num_filter_entries": 371, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763932846, "oldest_key_time": 1763932846, "file_creation_time": 1763932879, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Nov 23 21:21:19 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 8597 microseconds, and 4477 cpu microseconds.
Nov 23 21:21:19 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 21:21:19 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:21:19.869620) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #68: 682582 bytes OK
Nov 23 21:21:19 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:21:19.869638) [db/memtable_list.cc:519] [default] Level-0 commit table #68 started
Nov 23 21:21:19 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:21:19.870692) [db/memtable_list.cc:722] [default] Level-0 commit table #68: memtable #1 done
Nov 23 21:21:19 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:21:19.870708) EVENT_LOG_v1 {"time_micros": 1763932879870702, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 21:21:19 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:21:19.870725) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 21:21:19 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1033404, prev total WAL file size 1033404, number of live WAL files 2.
Nov 23 21:21:19 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000064.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:21:19 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:21:19.871183) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Nov 23 21:21:19 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 21:21:19 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [68(666KB)], [66(15MB)]
Nov 23 21:21:19 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932879871218, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [68], "files_L6": [66], "score": -1, "input_data_size": 17302646, "oldest_snapshot_seqno": -1}
Nov 23 21:21:19 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #69: 6528 keys, 15181559 bytes, temperature: kUnknown
Nov 23 21:21:19 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932879959164, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 69, "file_size": 15181559, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15137261, "index_size": 26908, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 171742, "raw_average_key_size": 26, "raw_value_size": 15018910, "raw_average_value_size": 2300, "num_data_blocks": 1063, "num_entries": 6528, "num_filter_entries": 6528, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763932879, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 69, "seqno_to_time_mapping": "N/A"}}
Nov 23 21:21:19 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 21:21:19 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:21:19.959417) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 15181559 bytes
Nov 23 21:21:19 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:21:19.960530) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 196.6 rd, 172.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 15.9 +0.0 blob) out(14.5 +0.0 blob), read-write-amplify(47.6) write-amplify(22.2) OK, records in: 7042, records dropped: 514 output_compression: NoCompression
Nov 23 21:21:19 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:21:19.960551) EVENT_LOG_v1 {"time_micros": 1763932879960541, "job": 40, "event": "compaction_finished", "compaction_time_micros": 88015, "compaction_time_cpu_micros": 38831, "output_level": 6, "num_output_files": 1, "total_output_size": 15181559, "num_input_records": 7042, "num_output_records": 6528, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 21:21:19 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:21:19 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932879960800, "job": 40, "event": "table_file_deletion", "file_number": 68}
Nov 23 21:21:19 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000066.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:21:19 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932879964503, "job": 40, "event": "table_file_deletion", "file_number": 66}
Nov 23 21:21:19 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:21:19.871112) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:21:19 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:21:19.964620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:21:19 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:21:19.964626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:21:19 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:21:19.964629) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:21:19 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:21:19.964631) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:21:19 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:21:19.964633) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:21:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:20.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:20 compute-1 ceph-mon[80135]: pgmap v1200: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 21:21:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:21:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:21.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:21:21 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/581265608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:21:22 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:21:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:22.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:22 compute-1 ceph-mon[80135]: pgmap v1201: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Nov 23 21:21:22 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/57458500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:21:23 compute-1 nova_compute[230183]: 2025-11-23 21:21:23.417 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:21:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:21:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:23.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:21:23 compute-1 nova_compute[230183]: 2025-11-23 21:21:23.553 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:21:23 compute-1 sshd-session[252994]: Invalid user solana from 161.35.133.66 port 57110
Nov 23 21:21:23 compute-1 sshd-session[252994]: Connection closed by invalid user solana 161.35.133.66 port 57110 [preauth]
Nov 23 21:21:23 compute-1 ceph-mon[80135]: pgmap v1202: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:21:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:24.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:25.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:25 compute-1 ceph-mon[80135]: pgmap v1203: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:21:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:26.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:27 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:21:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:27.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:28 compute-1 ceph-mon[80135]: pgmap v1204: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:21:28 compute-1 nova_compute[230183]: 2025-11-23 21:21:28.419 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:21:28 compute-1 nova_compute[230183]: 2025-11-23 21:21:28.555 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:21:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:28.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:28 compute-1 sudo[252998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:21:28 compute-1 sudo[252998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:21:28 compute-1 sudo[252998]: pam_unix(sudo:session): session closed for user root
Nov 23 21:21:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 21:21:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:29.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 21:21:30 compute-1 ceph-mon[80135]: pgmap v1205: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:21:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:21:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:30.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:21:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:31.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:32 compute-1 ceph-mon[80135]: pgmap v1206: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:21:32 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:21:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:32.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:33 compute-1 nova_compute[230183]: 2025-11-23 21:21:33.421 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:21:33 compute-1 nova_compute[230183]: 2025-11-23 21:21:33.557 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:21:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:33.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:34 compute-1 ceph-mon[80135]: pgmap v1207: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:21:34 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:21:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:34.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:35.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:36 compute-1 ceph-mon[80135]: pgmap v1208: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:21:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:36.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:37 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:21:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:37.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:38 compute-1 ceph-mon[80135]: pgmap v1209: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:21:38 compute-1 nova_compute[230183]: 2025-11-23 21:21:38.421 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:21:38 compute-1 nova_compute[230183]: 2025-11-23 21:21:38.559 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:21:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:38.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:39.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:40 compute-1 ceph-mon[80135]: pgmap v1210: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1005 B/s rd, 0 op/s
Nov 23 21:21:40 compute-1 podman[253030]: 2025-11-23 21:21:40.635640898 +0000 UTC m=+0.050160377 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 23 21:21:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:40.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:40 compute-1 podman[253029]: 2025-11-23 21:21:40.706814755 +0000 UTC m=+0.121412976 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller)
Nov 23 21:21:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:41.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:42 compute-1 ceph-mon[80135]: pgmap v1211: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:21:42 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:21:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:42.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:43 compute-1 nova_compute[230183]: 2025-11-23 21:21:43.423 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:21:43 compute-1 nova_compute[230183]: 2025-11-23 21:21:43.560 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:21:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:43.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:43 compute-1 podman[253075]: 2025-11-23 21:21:43.633753873 +0000 UTC m=+0.048840972 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 23 21:21:44 compute-1 ceph-mon[80135]: pgmap v1212: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1004 B/s rd, 0 op/s
Nov 23 21:21:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:44.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:45.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:46 compute-1 ceph-mon[80135]: pgmap v1213: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:21:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:46.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:47 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:21:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:47.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:48 compute-1 ceph-mon[80135]: pgmap v1214: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1004 B/s rd, 0 op/s
Nov 23 21:21:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:21:48 compute-1 nova_compute[230183]: 2025-11-23 21:21:48.423 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:21:48 compute-1 nova_compute[230183]: 2025-11-23 21:21:48.562 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:21:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:48.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:48 compute-1 sudo[253099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:21:48 compute-1 sudo[253099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:21:48 compute-1 sudo[253099]: pam_unix(sudo:session): session closed for user root
Nov 23 21:21:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:49.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:49 compute-1 sudo[253125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 21:21:49 compute-1 sudo[253125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:21:49 compute-1 sudo[253125]: pam_unix(sudo:session): session closed for user root
Nov 23 21:21:49 compute-1 sudo[253150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 21:21:49 compute-1 sudo[253150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:21:50 compute-1 ceph-mon[80135]: pgmap v1215: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1004 B/s rd, 0 op/s
Nov 23 21:21:50 compute-1 sudo[253150]: pam_unix(sudo:session): session closed for user root
Nov 23 21:21:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:50.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:21:51.079 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:21:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:21:51.080 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:21:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:21:51.080 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:21:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:51.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:51 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:21:51 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:21:51 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:21:51 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 21:21:51 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:21:51 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:21:51 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 21:21:51 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 21:21:51 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:21:52 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:21:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:52.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:52 compute-1 ceph-mon[80135]: pgmap v1216: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:21:52 compute-1 ceph-mon[80135]: pgmap v1217: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 23 21:21:53 compute-1 nova_compute[230183]: 2025-11-23 21:21:53.461 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:21:53 compute-1 nova_compute[230183]: 2025-11-23 21:21:53.564 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:21:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:53.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:54.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:54 compute-1 ceph-mon[80135]: pgmap v1218: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 23 21:21:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:55.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:55 compute-1 ceph-mon[80135]: pgmap v1219: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 23 21:21:56 compute-1 sudo[253208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 21:21:56 compute-1 sudo[253208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:21:56 compute-1 sudo[253208]: pam_unix(sudo:session): session closed for user root
Nov 23 21:21:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:21:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:56.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:21:56 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:21:56 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:21:57 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:21:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:21:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:57.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:21:58 compute-1 ceph-mon[80135]: pgmap v1220: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 23 21:21:58 compute-1 sshd[168248]: Timeout before authentication for connection from 67.201.33.12 to 38.102.83.106, pid = 252334
Nov 23 21:21:58 compute-1 nova_compute[230183]: 2025-11-23 21:21:58.464 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:21:58 compute-1 nova_compute[230183]: 2025-11-23 21:21:58.564 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:21:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:21:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:58.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:21:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:21:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 21:21:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:59.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 21:22:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:00.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:00 compute-1 ceph-mon[80135]: pgmap v1221: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Nov 23 21:22:01 compute-1 nova_compute[230183]: 2025-11-23 21:22:01.428 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:22:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:01.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:01 compute-1 ceph-mon[80135]: pgmap v1222: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 23 21:22:02 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:22:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:22:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:02.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:22:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:22:03 compute-1 nova_compute[230183]: 2025-11-23 21:22:03.466 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:22:03 compute-1 nova_compute[230183]: 2025-11-23 21:22:03.565 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:22:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:22:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:03.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:22:04 compute-1 ceph-mon[80135]: pgmap v1223: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:22:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:04.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:05.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:06.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:06 compute-1 sshd-session[253238]: Invalid user eth from 92.118.39.92 port 41856
Nov 23 21:22:06 compute-1 ceph-mon[80135]: pgmap v1224: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:22:06 compute-1 sshd-session[253238]: Connection closed by invalid user eth 92.118.39.92 port 41856 [preauth]
Nov 23 21:22:07 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:22:07 compute-1 nova_compute[230183]: 2025-11-23 21:22:07.440 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:22:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:07.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:08 compute-1 nova_compute[230183]: 2025-11-23 21:22:08.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:22:08 compute-1 nova_compute[230183]: 2025-11-23 21:22:08.460 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:22:08 compute-1 nova_compute[230183]: 2025-11-23 21:22:08.461 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:22:08 compute-1 nova_compute[230183]: 2025-11-23 21:22:08.461 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:22:08 compute-1 nova_compute[230183]: 2025-11-23 21:22:08.461 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 21:22:08 compute-1 nova_compute[230183]: 2025-11-23 21:22:08.461 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:22:08 compute-1 nova_compute[230183]: 2025-11-23 21:22:08.501 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:22:08 compute-1 nova_compute[230183]: 2025-11-23 21:22:08.568 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:22:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:08.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:08 compute-1 sudo[253261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:22:08 compute-1 sudo[253261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:22:08 compute-1 sudo[253261]: pam_unix(sudo:session): session closed for user root
Nov 23 21:22:08 compute-1 ceph-mon[80135]: pgmap v1225: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:22:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/1782666011' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 21:22:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/1782666011' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 21:22:08 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:22:08 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1469669657' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:22:08 compute-1 nova_compute[230183]: 2025-11-23 21:22:08.931 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:22:09 compute-1 nova_compute[230183]: 2025-11-23 21:22:09.085 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 21:22:09 compute-1 nova_compute[230183]: 2025-11-23 21:22:09.086 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4861MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 21:22:09 compute-1 nova_compute[230183]: 2025-11-23 21:22:09.087 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:22:09 compute-1 nova_compute[230183]: 2025-11-23 21:22:09.087 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:22:09 compute-1 nova_compute[230183]: 2025-11-23 21:22:09.311 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 21:22:09 compute-1 nova_compute[230183]: 2025-11-23 21:22:09.311 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 21:22:09 compute-1 nova_compute[230183]: 2025-11-23 21:22:09.462 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Refreshing inventories for resource provider bb217351-d4c8-44a4-9137-08393a1f72bc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 23 21:22:09 compute-1 nova_compute[230183]: 2025-11-23 21:22:09.583 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Updating ProviderTree inventory for provider bb217351-d4c8-44a4-9137-08393a1f72bc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 23 21:22:09 compute-1 nova_compute[230183]: 2025-11-23 21:22:09.583 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Updating inventory in ProviderTree for provider bb217351-d4c8-44a4-9137-08393a1f72bc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 23 21:22:09 compute-1 nova_compute[230183]: 2025-11-23 21:22:09.610 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Refreshing aggregate associations for resource provider bb217351-d4c8-44a4-9137-08393a1f72bc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 23 21:22:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:09.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:09 compute-1 nova_compute[230183]: 2025-11-23 21:22:09.644 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Refreshing trait associations for resource provider bb217351-d4c8-44a4-9137-08393a1f72bc, traits: COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,HW_CPU_X86_AVX,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SHA,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SVM,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 23 21:22:09 compute-1 nova_compute[230183]: 2025-11-23 21:22:09.661 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:22:09 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1469669657' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:22:09 compute-1 ceph-mon[80135]: pgmap v1226: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:22:10 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:22:10 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3107853304' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:22:10 compute-1 nova_compute[230183]: 2025-11-23 21:22:10.127 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:22:10 compute-1 nova_compute[230183]: 2025-11-23 21:22:10.133 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:22:10 compute-1 nova_compute[230183]: 2025-11-23 21:22:10.147 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:22:10 compute-1 nova_compute[230183]: 2025-11-23 21:22:10.149 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 21:22:10 compute-1 nova_compute[230183]: 2025-11-23 21:22:10.149 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:22:10 compute-1 nova_compute[230183]: 2025-11-23 21:22:10.150 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:22:10 compute-1 nova_compute[230183]: 2025-11-23 21:22:10.150 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 23 21:22:10 compute-1 nova_compute[230183]: 2025-11-23 21:22:10.439 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:22:10 compute-1 nova_compute[230183]: 2025-11-23 21:22:10.439 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:22:10 compute-1 nova_compute[230183]: 2025-11-23 21:22:10.440 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 23 21:22:10 compute-1 nova_compute[230183]: 2025-11-23 21:22:10.452 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 23 21:22:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:22:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:10.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:22:10 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3107853304' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:22:11 compute-1 nova_compute[230183]: 2025-11-23 21:22:11.440 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:22:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:22:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:11.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:22:11 compute-1 podman[253312]: 2025-11-23 21:22:11.627683234 +0000 UTC m=+0.044609469 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 23 21:22:11 compute-1 podman[253311]: 2025-11-23 21:22:11.656930373 +0000 UTC m=+0.074206207 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 21:22:11 compute-1 ceph-mon[80135]: pgmap v1227: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:22:12 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:22:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:12.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:13 compute-1 nova_compute[230183]: 2025-11-23 21:22:13.422 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:22:13 compute-1 nova_compute[230183]: 2025-11-23 21:22:13.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:22:13 compute-1 nova_compute[230183]: 2025-11-23 21:22:13.501 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:22:13 compute-1 nova_compute[230183]: 2025-11-23 21:22:13.569 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:22:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:13.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:14 compute-1 nova_compute[230183]: 2025-11-23 21:22:14.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:22:14 compute-1 nova_compute[230183]: 2025-11-23 21:22:14.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:22:14 compute-1 nova_compute[230183]: 2025-11-23 21:22:14.427 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 21:22:14 compute-1 podman[253359]: 2025-11-23 21:22:14.649628875 +0000 UTC m=+0.062932846 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 23 21:22:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:14.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:14 compute-1 ceph-mon[80135]: pgmap v1228: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:22:15 compute-1 nova_compute[230183]: 2025-11-23 21:22:15.428 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:22:15 compute-1 nova_compute[230183]: 2025-11-23 21:22:15.428 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 21:22:15 compute-1 nova_compute[230183]: 2025-11-23 21:22:15.429 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 21:22:15 compute-1 nova_compute[230183]: 2025-11-23 21:22:15.447 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 21:22:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:15.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:15 compute-1 ceph-mon[80135]: pgmap v1229: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:22:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:16.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:17 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:22:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:17.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:18 compute-1 nova_compute[230183]: 2025-11-23 21:22:18.442 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:22:18 compute-1 nova_compute[230183]: 2025-11-23 21:22:18.503 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:22:18 compute-1 nova_compute[230183]: 2025-11-23 21:22:18.570 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:22:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:22:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:18.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:22:18 compute-1 ceph-mon[80135]: pgmap v1230: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:22:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:22:18 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3698910792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:22:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:19.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:19 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1267239501' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:22:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:20.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:20 compute-1 ceph-mon[80135]: pgmap v1231: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:22:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:22:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:21.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:22:21 compute-1 ceph-mon[80135]: pgmap v1232: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:22:22 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:22:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:22.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:23 compute-1 nova_compute[230183]: 2025-11-23 21:22:23.507 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:22:23 compute-1 nova_compute[230183]: 2025-11-23 21:22:23.571 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:22:23 compute-1 nova_compute[230183]: 2025-11-23 21:22:23.573 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:22:23 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/304187688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:22:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:23.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:24 compute-1 ceph-mon[80135]: pgmap v1233: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:22:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:24.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:25.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:25 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2963775118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:22:26 compute-1 ceph-mon[80135]: pgmap v1234: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:22:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:26.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:27 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:22:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:27.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:27 compute-1 ceph-mon[80135]: pgmap v1235: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:22:28 compute-1 nova_compute[230183]: 2025-11-23 21:22:28.508 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:22:28 compute-1 nova_compute[230183]: 2025-11-23 21:22:28.574 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:22:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 21:22:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:28.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 21:22:28 compute-1 sudo[253386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:22:28 compute-1 sudo[253386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:22:28 compute-1 sudo[253386]: pam_unix(sudo:session): session closed for user root
Nov 23 21:22:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:29.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:30.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:30 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #70. Immutable memtables: 0.
Nov 23 21:22:30 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:22:30.829204) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 21:22:30 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 70
Nov 23 21:22:30 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932950829234, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 958, "num_deletes": 250, "total_data_size": 2084921, "memory_usage": 2117176, "flush_reason": "Manual Compaction"}
Nov 23 21:22:30 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #71: started
Nov 23 21:22:30 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932950837283, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 71, "file_size": 901115, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36954, "largest_seqno": 37907, "table_properties": {"data_size": 897476, "index_size": 1355, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9792, "raw_average_key_size": 20, "raw_value_size": 889693, "raw_average_value_size": 1901, "num_data_blocks": 58, "num_entries": 468, "num_filter_entries": 468, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763932880, "oldest_key_time": 1763932880, "file_creation_time": 1763932950, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Nov 23 21:22:30 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 8131 microseconds, and 3610 cpu microseconds.
Nov 23 21:22:30 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 21:22:30 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:22:30.837329) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #71: 901115 bytes OK
Nov 23 21:22:30 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:22:30.837350) [db/memtable_list.cc:519] [default] Level-0 commit table #71 started
Nov 23 21:22:30 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:22:30.838674) [db/memtable_list.cc:722] [default] Level-0 commit table #71: memtable #1 done
Nov 23 21:22:30 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:22:30.838688) EVENT_LOG_v1 {"time_micros": 1763932950838683, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 21:22:30 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:22:30.838705) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 21:22:30 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 2080125, prev total WAL file size 2080125, number of live WAL files 2.
Nov 23 21:22:30 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000067.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:22:30 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:22:30.839522) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303036' seq:72057594037927935, type:22 .. '6D6772737461740031323537' seq:0, type:0; will stop at (end)
Nov 23 21:22:30 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 21:22:30 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [71(879KB)], [69(14MB)]
Nov 23 21:22:30 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932950839657, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [71], "files_L6": [69], "score": -1, "input_data_size": 16082674, "oldest_snapshot_seqno": -1}
Nov 23 21:22:30 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #72: 6509 keys, 12457430 bytes, temperature: kUnknown
Nov 23 21:22:30 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932950945675, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 72, "file_size": 12457430, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12417137, "index_size": 22903, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 171528, "raw_average_key_size": 26, "raw_value_size": 12303004, "raw_average_value_size": 1890, "num_data_blocks": 897, "num_entries": 6509, "num_filter_entries": 6509, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763932950, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 72, "seqno_to_time_mapping": "N/A"}}
Nov 23 21:22:30 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 21:22:30 compute-1 ceph-mon[80135]: pgmap v1236: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:22:30 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:22:30.945941) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 12457430 bytes
Nov 23 21:22:30 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:22:30.948639) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.6 rd, 117.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 14.5 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(31.7) write-amplify(13.8) OK, records in: 6996, records dropped: 487 output_compression: NoCompression
Nov 23 21:22:30 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:22:30.948659) EVENT_LOG_v1 {"time_micros": 1763932950948650, "job": 42, "event": "compaction_finished", "compaction_time_micros": 106082, "compaction_time_cpu_micros": 31243, "output_level": 6, "num_output_files": 1, "total_output_size": 12457430, "num_input_records": 6996, "num_output_records": 6509, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 21:22:30 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:22:30 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932950949057, "job": 42, "event": "table_file_deletion", "file_number": 71}
Nov 23 21:22:30 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000069.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:22:30 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932950952122, "job": 42, "event": "table_file_deletion", "file_number": 69}
Nov 23 21:22:30 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:22:30.839341) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:22:30 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:22:30.952361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:22:30 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:22:30.952373) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:22:30 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:22:30.952376) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:22:30 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:22:30.952378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:22:30 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:22:30.952381) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:22:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 21:22:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:31.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 21:22:31 compute-1 ceph-mon[80135]: pgmap v1237: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:22:32 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:22:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:32.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:22:33 compute-1 nova_compute[230183]: 2025-11-23 21:22:33.510 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:22:33 compute-1 nova_compute[230183]: 2025-11-23 21:22:33.574 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:22:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:33.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:34 compute-1 ceph-mon[80135]: pgmap v1238: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:22:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:34.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:35.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:36.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:36 compute-1 ceph-mon[80135]: pgmap v1239: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:22:37 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:22:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:22:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:37.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:22:38 compute-1 ceph-mon[80135]: pgmap v1240: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:22:38 compute-1 nova_compute[230183]: 2025-11-23 21:22:38.576 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:22:38 compute-1 nova_compute[230183]: 2025-11-23 21:22:38.578 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:22:38 compute-1 nova_compute[230183]: 2025-11-23 21:22:38.578 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 23 21:22:38 compute-1 nova_compute[230183]: 2025-11-23 21:22:38.578 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:22:38 compute-1 nova_compute[230183]: 2025-11-23 21:22:38.727 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:22:38 compute-1 nova_compute[230183]: 2025-11-23 21:22:38.727 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:22:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:38.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:39.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:40.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:40 compute-1 ceph-mon[80135]: pgmap v1241: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:22:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:41.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:41 compute-1 ceph-mon[80135]: pgmap v1242: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:22:42 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:22:42 compute-1 podman[253419]: 2025-11-23 21:22:42.645073153 +0000 UTC m=+0.054005800 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 23 21:22:42 compute-1 podman[253418]: 2025-11-23 21:22:42.668838026 +0000 UTC m=+0.084592784 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller)
Nov 23 21:22:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:22:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:42.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:22:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:43.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:43 compute-1 nova_compute[230183]: 2025-11-23 21:22:43.728 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:22:43 compute-1 nova_compute[230183]: 2025-11-23 21:22:43.729 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:22:43 compute-1 nova_compute[230183]: 2025-11-23 21:22:43.729 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 23 21:22:43 compute-1 nova_compute[230183]: 2025-11-23 21:22:43.729 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:22:43 compute-1 nova_compute[230183]: 2025-11-23 21:22:43.729 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:22:43 compute-1 nova_compute[230183]: 2025-11-23 21:22:43.731 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:22:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:44.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:44 compute-1 ceph-mon[80135]: pgmap v1243: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:22:45 compute-1 radosgw[84498]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Nov 23 21:22:45 compute-1 podman[253463]: 2025-11-23 21:22:45.642736387 +0000 UTC m=+0.058217452 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 23 21:22:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:45.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:45 compute-1 ceph-mon[80135]: pgmap v1244: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:22:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:46.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:47 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:22:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:47.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:48 compute-1 nova_compute[230183]: 2025-11-23 21:22:48.731 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:22:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:48.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:48 compute-1 ceph-mon[80135]: pgmap v1245: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:22:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:22:49 compute-1 sudo[253483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:22:49 compute-1 sudo[253483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:22:49 compute-1 sudo[253483]: pam_unix(sudo:session): session closed for user root
Nov 23 21:22:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:49.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:49 compute-1 ceph-mon[80135]: pgmap v1246: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 0 B/s wr, 174 op/s
Nov 23 21:22:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:50.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:22:51.081 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:22:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:22:51.081 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:22:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:22:51.081 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:22:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:51.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:51 compute-1 ceph-mon[80135]: pgmap v1247: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 0 B/s wr, 174 op/s
Nov 23 21:22:52 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:22:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:22:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:52.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:22:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:53.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:53 compute-1 nova_compute[230183]: 2025-11-23 21:22:53.732 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:22:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:54.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:54 compute-1 ceph-mon[80135]: pgmap v1248: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 0 B/s wr, 174 op/s
Nov 23 21:22:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:55.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:55 compute-1 ceph-mon[80135]: pgmap v1249: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 0 B/s wr, 174 op/s
Nov 23 21:22:56 compute-1 sudo[253513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 21:22:56 compute-1 sudo[253513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:22:56 compute-1 sudo[253513]: pam_unix(sudo:session): session closed for user root
Nov 23 21:22:56 compute-1 sudo[253538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 21:22:56 compute-1 sudo[253538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:22:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:22:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:56.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:22:57 compute-1 sudo[253538]: pam_unix(sudo:session): session closed for user root
Nov 23 21:22:57 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:22:57 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 21:22:57 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:22:57 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:22:57 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 21:22:57 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 21:22:57 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:22:57 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:22:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:57.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:58 compute-1 ceph-mon[80135]: pgmap v1250: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 0 B/s wr, 182 op/s
Nov 23 21:22:58 compute-1 nova_compute[230183]: 2025-11-23 21:22:58.733 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:22:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:58.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:22:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:22:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:22:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:59.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:00 compute-1 ceph-mon[80135]: pgmap v1251: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 0 B/s wr, 182 op/s
Nov 23 21:23:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:23:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:00.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:23:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:01.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:02 compute-1 ceph-mon[80135]: pgmap v1252: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 23 21:23:02 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:23:02 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:23:02 compute-1 sudo[253598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 21:23:02 compute-1 sudo[253598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:23:02 compute-1 sudo[253598]: pam_unix(sudo:session): session closed for user root
Nov 23 21:23:02 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:23:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:02.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:23:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:03.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:03 compute-1 nova_compute[230183]: 2025-11-23 21:23:03.736 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:23:04 compute-1 ceph-mon[80135]: pgmap v1253: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 23 21:23:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 21:23:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:04.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 21:23:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:05.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:06 compute-1 ceph-mon[80135]: pgmap v1254: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Nov 23 21:23:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 21:23:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:06.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 21:23:07 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:23:07 compute-1 nova_compute[230183]: 2025-11-23 21:23:07.443 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:23:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:07.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:08 compute-1 ceph-mon[80135]: pgmap v1255: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 23 21:23:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/2686989106' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 21:23:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/2686989106' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 21:23:08 compute-1 nova_compute[230183]: 2025-11-23 21:23:08.738 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:23:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:08.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:09 compute-1 sudo[253626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:23:09 compute-1 sudo[253626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:23:09 compute-1 sudo[253626]: pam_unix(sudo:session): session closed for user root
Nov 23 21:23:09 compute-1 nova_compute[230183]: 2025-11-23 21:23:09.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:23:09 compute-1 nova_compute[230183]: 2025-11-23 21:23:09.465 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:23:09 compute-1 nova_compute[230183]: 2025-11-23 21:23:09.466 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:23:09 compute-1 nova_compute[230183]: 2025-11-23 21:23:09.466 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:23:09 compute-1 nova_compute[230183]: 2025-11-23 21:23:09.466 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 21:23:09 compute-1 nova_compute[230183]: 2025-11-23 21:23:09.466 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:23:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:23:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:09.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:23:09 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:23:09 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/438076266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:23:09 compute-1 nova_compute[230183]: 2025-11-23 21:23:09.903 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:23:10 compute-1 nova_compute[230183]: 2025-11-23 21:23:10.177 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 21:23:10 compute-1 nova_compute[230183]: 2025-11-23 21:23:10.178 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4866MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 21:23:10 compute-1 nova_compute[230183]: 2025-11-23 21:23:10.178 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:23:10 compute-1 nova_compute[230183]: 2025-11-23 21:23:10.179 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:23:10 compute-1 nova_compute[230183]: 2025-11-23 21:23:10.296 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 21:23:10 compute-1 nova_compute[230183]: 2025-11-23 21:23:10.296 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 21:23:10 compute-1 nova_compute[230183]: 2025-11-23 21:23:10.339 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:23:10 compute-1 ceph-mon[80135]: pgmap v1256: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:23:10 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/438076266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:23:10 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:23:10 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/438593123' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:23:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:10.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:10 compute-1 nova_compute[230183]: 2025-11-23 21:23:10.787 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:23:10 compute-1 nova_compute[230183]: 2025-11-23 21:23:10.794 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:23:10 compute-1 nova_compute[230183]: 2025-11-23 21:23:10.858 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:23:10 compute-1 nova_compute[230183]: 2025-11-23 21:23:10.863 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 21:23:10 compute-1 nova_compute[230183]: 2025-11-23 21:23:10.863 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:23:11 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/438593123' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:23:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:23:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:11.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:23:12 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:23:12 compute-1 ceph-mon[80135]: pgmap v1257: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:23:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:23:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:12.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:23:12 compute-1 nova_compute[230183]: 2025-11-23 21:23:12.864 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:23:12 compute-1 nova_compute[230183]: 2025-11-23 21:23:12.864 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:23:13 compute-1 podman[253698]: 2025-11-23 21:23:13.645834151 +0000 UTC m=+0.049328594 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 21:23:13 compute-1 podman[253697]: 2025-11-23 21:23:13.679615431 +0000 UTC m=+0.084254825 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true)
Nov 23 21:23:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:13.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:13 compute-1 nova_compute[230183]: 2025-11-23 21:23:13.740 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:23:14 compute-1 nova_compute[230183]: 2025-11-23 21:23:14.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:23:14 compute-1 nova_compute[230183]: 2025-11-23 21:23:14.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:23:14 compute-1 nova_compute[230183]: 2025-11-23 21:23:14.427 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 21:23:14 compute-1 ceph-mon[80135]: pgmap v1258: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:23:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:14.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:15 compute-1 nova_compute[230183]: 2025-11-23 21:23:15.423 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:23:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:15.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:16 compute-1 nova_compute[230183]: 2025-11-23 21:23:16.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:23:16 compute-1 nova_compute[230183]: 2025-11-23 21:23:16.427 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 21:23:16 compute-1 nova_compute[230183]: 2025-11-23 21:23:16.428 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 21:23:16 compute-1 nova_compute[230183]: 2025-11-23 21:23:16.463 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 21:23:16 compute-1 nova_compute[230183]: 2025-11-23 21:23:16.463 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:23:16 compute-1 ceph-mon[80135]: pgmap v1259: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:23:16 compute-1 podman[253743]: 2025-11-23 21:23:16.665703676 +0000 UTC m=+0.074150919 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 23 21:23:16 compute-1 nova_compute[230183]: 2025-11-23 21:23:16.779 230187 DEBUG oslo_concurrency.processutils [None req-d89b790b-8376-465b-8448-23090b964ac1 8c34b8adab3049c9b4e37e075333da23 3f8fb5175f85402ba20cf9c6989d47cf - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:23:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:16.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:16 compute-1 nova_compute[230183]: 2025-11-23 21:23:16.813 230187 DEBUG oslo_concurrency.processutils [None req-d89b790b-8376-465b-8448-23090b964ac1 8c34b8adab3049c9b4e37e075333da23 3f8fb5175f85402ba20cf9c6989d47cf - - default default] CMD "env LANG=C uptime" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:23:17 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:23:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:17.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:18 compute-1 nova_compute[230183]: 2025-11-23 21:23:18.742 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:23:18 compute-1 nova_compute[230183]: 2025-11-23 21:23:18.743 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:23:18 compute-1 nova_compute[230183]: 2025-11-23 21:23:18.744 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 23 21:23:18 compute-1 nova_compute[230183]: 2025-11-23 21:23:18.744 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:23:18 compute-1 nova_compute[230183]: 2025-11-23 21:23:18.745 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:23:18 compute-1 nova_compute[230183]: 2025-11-23 21:23:18.746 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:23:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:18.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:18 compute-1 ceph-mon[80135]: pgmap v1260: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:23:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:23:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:23:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:19.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:23:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:20.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:20 compute-1 ceph-mon[80135]: pgmap v1261: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:23:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:23:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:21.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:23:21 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3527708452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:23:22 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:23:22.008 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 21:23:22 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:23:22.009 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 23 21:23:22 compute-1 nova_compute[230183]: 2025-11-23 21:23:22.058 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:23:22 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:23:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:23:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:22.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:23:22 compute-1 ceph-mon[80135]: pgmap v1262: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:23:22 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3272348093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:23:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:23:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:23.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:23:23 compute-1 nova_compute[230183]: 2025-11-23 21:23:23.745 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:23:23 compute-1 nova_compute[230183]: 2025-11-23 21:23:23.748 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:23:23 compute-1 ceph-mon[80135]: pgmap v1263: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:23:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:24.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:23:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:25.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:23:26 compute-1 ceph-mon[80135]: pgmap v1264: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:23:26 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2338104820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:23:26 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1352909187' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:23:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:23:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:26.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:23:27 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:23:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:23:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:27.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:23:28 compute-1 ceph-mon[80135]: pgmap v1265: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:23:28 compute-1 nova_compute[230183]: 2025-11-23 21:23:28.747 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:23:28 compute-1 nova_compute[230183]: 2025-11-23 21:23:28.749 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:23:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:28.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:29 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:23:29.011 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d8ff4ac4-2bee-48db-b79e-2466bc4db046, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 21:23:29 compute-1 sudo[253772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:23:29 compute-1 sudo[253772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:23:29 compute-1 sudo[253772]: pam_unix(sudo:session): session closed for user root
Nov 23 21:23:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:29.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:30 compute-1 ceph-mon[80135]: pgmap v1266: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:23:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:30.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:31.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:32 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:23:32 compute-1 ceph-mon[80135]: pgmap v1267: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:23:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:32.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:23:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:33.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:33 compute-1 nova_compute[230183]: 2025-11-23 21:23:33.749 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:23:33 compute-1 nova_compute[230183]: 2025-11-23 21:23:33.750 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:23:34 compute-1 ceph-mon[80135]: pgmap v1268: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:23:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:34.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:35.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:36 compute-1 ceph-mon[80135]: pgmap v1269: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:23:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:23:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:36.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:23:37 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:23:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:37.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:38 compute-1 ceph-mon[80135]: pgmap v1270: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1011 B/s rd, 0 op/s
Nov 23 21:23:38 compute-1 nova_compute[230183]: 2025-11-23 21:23:38.751 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:23:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:38.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:23:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:39.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:23:40 compute-1 ceph-mon[80135]: pgmap v1271: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:23:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:40.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:23:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:41.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:23:42 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:23:42 compute-1 ceph-mon[80135]: pgmap v1272: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:23:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:42.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:43.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:43 compute-1 nova_compute[230183]: 2025-11-23 21:23:43.753 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:23:44 compute-1 ceph-mon[80135]: pgmap v1273: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1011 B/s rd, 0 op/s
Nov 23 21:23:44 compute-1 podman[253806]: 2025-11-23 21:23:44.663756652 +0000 UTC m=+0.070319547 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 23 21:23:44 compute-1 podman[253805]: 2025-11-23 21:23:44.691201634 +0000 UTC m=+0.104552370 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 21:23:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:44.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:45.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:46 compute-1 ceph-mon[80135]: pgmap v1274: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:23:46 compute-1 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 21:23:46 compute-1 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 21:23:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:23:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:46.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:23:47 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:23:47 compute-1 podman[253852]: 2025-11-23 21:23:47.687380234 +0000 UTC m=+0.100591395 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 23 21:23:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:47.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:48 compute-1 ceph-mon[80135]: pgmap v1275: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:23:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:23:48 compute-1 nova_compute[230183]: 2025-11-23 21:23:48.755 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:23:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:48.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:49 compute-1 sudo[253874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:23:49 compute-1 sudo[253874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:23:49 compute-1 sudo[253874]: pam_unix(sudo:session): session closed for user root
Nov 23 21:23:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:49.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:50 compute-1 ceph-mon[80135]: pgmap v1276: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:23:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:50.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:23:51.083 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:23:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:23:51.083 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:23:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:23:51.084 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:23:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:51.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:52 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:23:52 compute-1 ceph-mon[80135]: pgmap v1277: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:23:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:52.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:53 compute-1 nova_compute[230183]: 2025-11-23 21:23:53.756 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:23:53 compute-1 nova_compute[230183]: 2025-11-23 21:23:53.758 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:23:53 compute-1 nova_compute[230183]: 2025-11-23 21:23:53.758 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 23 21:23:53 compute-1 nova_compute[230183]: 2025-11-23 21:23:53.758 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:23:53 compute-1 nova_compute[230183]: 2025-11-23 21:23:53.759 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:23:53 compute-1 nova_compute[230183]: 2025-11-23 21:23:53.760 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:23:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:23:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:53.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:23:54 compute-1 ceph-mon[80135]: pgmap v1278: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:23:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:54.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:55.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:56 compute-1 ceph-mon[80135]: pgmap v1279: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:23:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:56.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:57 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:23:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:57.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:58 compute-1 ceph-mon[80135]: pgmap v1280: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:23:58 compute-1 nova_compute[230183]: 2025-11-23 21:23:58.761 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:23:58 compute-1 nova_compute[230183]: 2025-11-23 21:23:58.762 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:23:58 compute-1 nova_compute[230183]: 2025-11-23 21:23:58.762 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 23 21:23:58 compute-1 nova_compute[230183]: 2025-11-23 21:23:58.762 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:23:58 compute-1 nova_compute[230183]: 2025-11-23 21:23:58.782 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:23:58 compute-1 nova_compute[230183]: 2025-11-23 21:23:58.783 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:23:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:58.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:23:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:23:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:23:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:59.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:00 compute-1 ceph-mon[80135]: pgmap v1281: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:24:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:00.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:01.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:02 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:24:02 compute-1 sudo[253905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 21:24:02 compute-1 sudo[253905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:24:02 compute-1 sudo[253905]: pam_unix(sudo:session): session closed for user root
Nov 23 21:24:02 compute-1 sudo[253930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 21:24:02 compute-1 sudo[253930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:24:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:02.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:03 compute-1 ceph-mon[80135]: pgmap v1282: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:24:03 compute-1 sudo[253930]: pam_unix(sudo:session): session closed for user root
Nov 23 21:24:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:24:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:03.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:24:03 compute-1 nova_compute[230183]: 2025-11-23 21:24:03.784 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:24:04 compute-1 sshd-session[253988]: Invalid user solana from 161.35.133.66 port 45538
Nov 23 21:24:04 compute-1 sshd-session[253988]: Connection closed by invalid user solana 161.35.133.66 port 45538 [preauth]
Nov 23 21:24:04 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:24:04 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:24:04 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 21:24:04 compute-1 ceph-mon[80135]: pgmap v1283: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 23 21:24:04 compute-1 ceph-mon[80135]: pgmap v1284: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 21:24:04 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:24:04 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:24:04 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 21:24:04 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 21:24:04 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:24:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:24:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:04.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:24:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:05.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:06 compute-1 ceph-mon[80135]: pgmap v1285: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 21:24:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:06.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:07 compute-1 nova_compute[230183]: 2025-11-23 21:24:07.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:24:07 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:24:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:07.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:08 compute-1 ceph-mon[80135]: pgmap v1286: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 21:24:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/926478707' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 21:24:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/926478707' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 21:24:08 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:24:08 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:24:08 compute-1 sudo[253992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 21:24:08 compute-1 sudo[253992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:24:08 compute-1 sudo[253992]: pam_unix(sudo:session): session closed for user root
Nov 23 21:24:08 compute-1 nova_compute[230183]: 2025-11-23 21:24:08.785 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:24:08 compute-1 nova_compute[230183]: 2025-11-23 21:24:08.787 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:24:08 compute-1 nova_compute[230183]: 2025-11-23 21:24:08.788 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 23 21:24:08 compute-1 nova_compute[230183]: 2025-11-23 21:24:08.788 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:24:08 compute-1 nova_compute[230183]: 2025-11-23 21:24:08.788 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:24:08 compute-1 nova_compute[230183]: 2025-11-23 21:24:08.790 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:24:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:08.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:24:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:09.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:24:09 compute-1 sudo[254018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:24:09 compute-1 sudo[254018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:24:09 compute-1 sudo[254018]: pam_unix(sudo:session): session closed for user root
Nov 23 21:24:10 compute-1 ceph-mon[80135]: pgmap v1287: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 21:24:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:10.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:11 compute-1 nova_compute[230183]: 2025-11-23 21:24:11.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:24:11 compute-1 nova_compute[230183]: 2025-11-23 21:24:11.454 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:24:11 compute-1 nova_compute[230183]: 2025-11-23 21:24:11.454 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:24:11 compute-1 nova_compute[230183]: 2025-11-23 21:24:11.454 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:24:11 compute-1 nova_compute[230183]: 2025-11-23 21:24:11.455 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 21:24:11 compute-1 nova_compute[230183]: 2025-11-23 21:24:11.455 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:24:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:24:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:11.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:24:11 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:24:11 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1184943175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:24:11 compute-1 nova_compute[230183]: 2025-11-23 21:24:11.886 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:24:12 compute-1 nova_compute[230183]: 2025-11-23 21:24:12.047 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 21:24:12 compute-1 nova_compute[230183]: 2025-11-23 21:24:12.048 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4856MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 21:24:12 compute-1 nova_compute[230183]: 2025-11-23 21:24:12.048 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:24:12 compute-1 nova_compute[230183]: 2025-11-23 21:24:12.048 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:24:12 compute-1 nova_compute[230183]: 2025-11-23 21:24:12.123 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 21:24:12 compute-1 nova_compute[230183]: 2025-11-23 21:24:12.124 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 21:24:12 compute-1 nova_compute[230183]: 2025-11-23 21:24:12.150 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:24:12 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:24:12 compute-1 ceph-mon[80135]: pgmap v1288: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 21:24:12 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1184943175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:24:12 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:24:12 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4061681963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:24:12 compute-1 nova_compute[230183]: 2025-11-23 21:24:12.585 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:24:12 compute-1 nova_compute[230183]: 2025-11-23 21:24:12.591 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:24:12 compute-1 nova_compute[230183]: 2025-11-23 21:24:12.612 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:24:12 compute-1 nova_compute[230183]: 2025-11-23 21:24:12.614 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 21:24:12 compute-1 nova_compute[230183]: 2025-11-23 21:24:12.614 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:24:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:12.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:13 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/4061681963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:24:13 compute-1 nova_compute[230183]: 2025-11-23 21:24:13.614 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:24:13 compute-1 nova_compute[230183]: 2025-11-23 21:24:13.614 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:24:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:13.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:13 compute-1 nova_compute[230183]: 2025-11-23 21:24:13.789 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:24:14 compute-1 nova_compute[230183]: 2025-11-23 21:24:14.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:24:14 compute-1 nova_compute[230183]: 2025-11-23 21:24:14.427 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 21:24:14 compute-1 ceph-mon[80135]: pgmap v1289: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 21:24:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:14.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:15 compute-1 podman[254090]: 2025-11-23 21:24:15.650769938 +0000 UTC m=+0.055089141 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 21:24:15 compute-1 podman[254089]: 2025-11-23 21:24:15.685650498 +0000 UTC m=+0.096636058 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller)
Nov 23 21:24:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:24:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:15.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:24:16 compute-1 nova_compute[230183]: 2025-11-23 21:24:16.423 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:24:16 compute-1 nova_compute[230183]: 2025-11-23 21:24:16.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:24:16 compute-1 nova_compute[230183]: 2025-11-23 21:24:16.427 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 21:24:16 compute-1 nova_compute[230183]: 2025-11-23 21:24:16.427 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 21:24:16 compute-1 nova_compute[230183]: 2025-11-23 21:24:16.445 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 21:24:16 compute-1 nova_compute[230183]: 2025-11-23 21:24:16.446 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:24:16 compute-1 nova_compute[230183]: 2025-11-23 21:24:16.446 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:24:16 compute-1 ceph-mon[80135]: pgmap v1290: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:24:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:24:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:16.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:24:17 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:24:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:17.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:18 compute-1 ceph-mon[80135]: pgmap v1291: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:24:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:24:18 compute-1 podman[254135]: 2025-11-23 21:24:18.651829948 +0000 UTC m=+0.071085808 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd)
Nov 23 21:24:18 compute-1 nova_compute[230183]: 2025-11-23 21:24:18.791 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:24:18 compute-1 nova_compute[230183]: 2025-11-23 21:24:18.794 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:24:18 compute-1 nova_compute[230183]: 2025-11-23 21:24:18.794 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 23 21:24:18 compute-1 nova_compute[230183]: 2025-11-23 21:24:18.794 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:24:18 compute-1 nova_compute[230183]: 2025-11-23 21:24:18.842 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:24:18 compute-1 nova_compute[230183]: 2025-11-23 21:24:18.843 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:24:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:18.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:19.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:20 compute-1 ceph-mon[80135]: pgmap v1292: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:24:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:20.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:24:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:21.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:24:22 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:24:22 compute-1 ceph-mon[80135]: pgmap v1293: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:24:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:22.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:23 compute-1 nova_compute[230183]: 2025-11-23 21:24:23.442 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:24:23 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2216084231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:24:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:23.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:23 compute-1 nova_compute[230183]: 2025-11-23 21:24:23.844 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:24:24 compute-1 ceph-mon[80135]: pgmap v1294: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:24:24 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3597113063' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:24:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:24.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:25.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:26 compute-1 ceph-mon[80135]: pgmap v1295: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:24:26 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1597342613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:24:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:26.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:27 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:24:27 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2394746220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:24:27 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:24:27 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2394746220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:24:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:27.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:28 compute-1 ceph-mon[80135]: pgmap v1296: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:24:28 compute-1 nova_compute[230183]: 2025-11-23 21:24:28.845 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:24:28 compute-1 nova_compute[230183]: 2025-11-23 21:24:28.847 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:24:28 compute-1 nova_compute[230183]: 2025-11-23 21:24:28.847 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 23 21:24:28 compute-1 nova_compute[230183]: 2025-11-23 21:24:28.847 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:24:28 compute-1 nova_compute[230183]: 2025-11-23 21:24:28.886 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:24:28 compute-1 nova_compute[230183]: 2025-11-23 21:24:28.886 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:24:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:24:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:28.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:24:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:24:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:29.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:24:29 compute-1 sudo[254161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:24:29 compute-1 sudo[254161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:24:29 compute-1 sudo[254161]: pam_unix(sudo:session): session closed for user root
Nov 23 21:24:30 compute-1 ceph-mon[80135]: pgmap v1297: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:24:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:30.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:31.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:32 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:24:32 compute-1 ceph-mon[80135]: pgmap v1298: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:24:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:32.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:24:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:33.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:33 compute-1 nova_compute[230183]: 2025-11-23 21:24:33.888 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:24:33 compute-1 nova_compute[230183]: 2025-11-23 21:24:33.888 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:24:33 compute-1 nova_compute[230183]: 2025-11-23 21:24:33.889 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 23 21:24:33 compute-1 nova_compute[230183]: 2025-11-23 21:24:33.889 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:24:33 compute-1 nova_compute[230183]: 2025-11-23 21:24:33.890 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:24:34 compute-1 ceph-mon[80135]: pgmap v1299: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:24:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:34.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:24:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:36.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:24:36 compute-1 ceph-mon[80135]: pgmap v1300: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:24:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:36.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:37 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:24:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:24:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:38.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:24:38 compute-1 ceph-mon[80135]: pgmap v1301: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:24:38 compute-1 nova_compute[230183]: 2025-11-23 21:24:38.888 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:24:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:24:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:38.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:24:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:40.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:40 compute-1 ceph-mon[80135]: pgmap v1302: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:24:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 21:24:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:40.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 21:24:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:42.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:42 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:24:42 compute-1 ceph-mon[80135]: pgmap v1303: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:24:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:42.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:43 compute-1 nova_compute[230183]: 2025-11-23 21:24:43.889 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:24:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:24:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:44.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:24:44 compute-1 ceph-mon[80135]: pgmap v1304: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:24:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:24:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:44.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:24:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:46.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:46 compute-1 ceph-mon[80135]: pgmap v1305: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:24:46 compute-1 podman[254195]: 2025-11-23 21:24:46.64622537 +0000 UTC m=+0.061630545 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 23 21:24:46 compute-1 podman[254194]: 2025-11-23 21:24:46.704063952 +0000 UTC m=+0.113878118 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 23 21:24:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:46.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:47 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:24:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:48.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:48 compute-1 ceph-mon[80135]: pgmap v1306: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:24:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:24:48 compute-1 nova_compute[230183]: 2025-11-23 21:24:48.892 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:24:48 compute-1 nova_compute[230183]: 2025-11-23 21:24:48.893 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:24:48 compute-1 nova_compute[230183]: 2025-11-23 21:24:48.894 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 23 21:24:48 compute-1 nova_compute[230183]: 2025-11-23 21:24:48.894 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:24:48 compute-1 nova_compute[230183]: 2025-11-23 21:24:48.930 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:24:48 compute-1 nova_compute[230183]: 2025-11-23 21:24:48.931 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:24:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:48.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:49 compute-1 podman[254240]: 2025-11-23 21:24:49.678542655 +0000 UTC m=+0.088944714 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible)
Nov 23 21:24:50 compute-1 sudo[254263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:24:50 compute-1 sudo[254263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:24:50 compute-1 sudo[254263]: pam_unix(sudo:session): session closed for user root
Nov 23 21:24:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:50.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:50 compute-1 ceph-mon[80135]: pgmap v1307: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:24:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:50.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:24:51.085 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:24:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:24:51.086 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:24:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:24:51.086 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:24:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:52.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:52 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:24:52 compute-1 ceph-mon[80135]: pgmap v1308: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:24:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:52.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:53 compute-1 nova_compute[230183]: 2025-11-23 21:24:53.931 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:24:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:54.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:54 compute-1 ceph-mon[80135]: pgmap v1309: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:24:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:54.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:24:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:56.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:24:56 compute-1 ceph-mon[80135]: pgmap v1310: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:24:56 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #73. Immutable memtables: 0.
Nov 23 21:24:56 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:24:56.684531) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 21:24:56 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 73
Nov 23 21:24:56 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763933096684574, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1636, "num_deletes": 251, "total_data_size": 4102582, "memory_usage": 4151296, "flush_reason": "Manual Compaction"}
Nov 23 21:24:56 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #74: started
Nov 23 21:24:56 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763933096711618, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 74, "file_size": 2679315, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37912, "largest_seqno": 39543, "table_properties": {"data_size": 2672517, "index_size": 3933, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14367, "raw_average_key_size": 20, "raw_value_size": 2658824, "raw_average_value_size": 3713, "num_data_blocks": 171, "num_entries": 716, "num_filter_entries": 716, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763932950, "oldest_key_time": 1763932950, "file_creation_time": 1763933096, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Nov 23 21:24:56 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 27128 microseconds, and 10271 cpu microseconds.
Nov 23 21:24:56 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 21:24:56 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:24:56.711659) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #74: 2679315 bytes OK
Nov 23 21:24:56 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:24:56.711677) [db/memtable_list.cc:519] [default] Level-0 commit table #74 started
Nov 23 21:24:56 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:24:56.712738) [db/memtable_list.cc:722] [default] Level-0 commit table #74: memtable #1 done
Nov 23 21:24:56 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:24:56.712750) EVENT_LOG_v1 {"time_micros": 1763933096712746, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 21:24:56 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:24:56.712768) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 21:24:56 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 4095127, prev total WAL file size 4095127, number of live WAL files 2.
Nov 23 21:24:56 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000070.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:24:56 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:24:56.713728) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Nov 23 21:24:56 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 21:24:56 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [74(2616KB)], [72(11MB)]
Nov 23 21:24:56 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763933096713798, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [74], "files_L6": [72], "score": -1, "input_data_size": 15136745, "oldest_snapshot_seqno": -1}
Nov 23 21:24:56 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #75: 6709 keys, 12987279 bytes, temperature: kUnknown
Nov 23 21:24:56 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763933096801780, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 75, "file_size": 12987279, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12945252, "index_size": 24123, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16837, "raw_key_size": 176383, "raw_average_key_size": 26, "raw_value_size": 12827118, "raw_average_value_size": 1911, "num_data_blocks": 946, "num_entries": 6709, "num_filter_entries": 6709, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763933096, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 75, "seqno_to_time_mapping": "N/A"}}
Nov 23 21:24:56 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 21:24:56 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:24:56.802307) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 12987279 bytes
Nov 23 21:24:56 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:24:56.805342) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.6 rd, 147.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 11.9 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(10.5) write-amplify(4.8) OK, records in: 7225, records dropped: 516 output_compression: NoCompression
Nov 23 21:24:56 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:24:56.805376) EVENT_LOG_v1 {"time_micros": 1763933096805359, "job": 44, "event": "compaction_finished", "compaction_time_micros": 88213, "compaction_time_cpu_micros": 36118, "output_level": 6, "num_output_files": 1, "total_output_size": 12987279, "num_input_records": 7225, "num_output_records": 6709, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 21:24:56 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:24:56 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763933096806487, "job": 44, "event": "table_file_deletion", "file_number": 74}
Nov 23 21:24:56 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000072.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:24:56 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763933096810536, "job": 44, "event": "table_file_deletion", "file_number": 72}
Nov 23 21:24:56 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:24:56.713494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:24:56 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:24:56.810603) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:24:56 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:24:56.810608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:24:56 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:24:56.810610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:24:56 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:24:56.810611) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:24:56 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:24:56.810612) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:24:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:24:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:56.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:24:57 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:24:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:58.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:24:58 compute-1 ceph-mon[80135]: pgmap v1311: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:24:58 compute-1 nova_compute[230183]: 2025-11-23 21:24:58.933 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:24:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:24:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:24:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:58.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:25:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:25:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:00.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:25:00 compute-1 ceph-mon[80135]: pgmap v1312: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:25:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:25:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:00.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:25:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:25:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:02.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:25:02 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:25:02 compute-1 ceph-mon[80135]: pgmap v1313: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:25:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:25:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:02.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:25:03 compute-1 nova_compute[230183]: 2025-11-23 21:25:03.933 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:25:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:25:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:04.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:25:04 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:25:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:25:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:04.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:25:05 compute-1 ceph-mon[80135]: pgmap v1314: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:25:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:25:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:06.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:25:06 compute-1 ceph-mon[80135]: pgmap v1315: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:25:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:25:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:06.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:25:07 compute-1 nova_compute[230183]: 2025-11-23 21:25:07.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:25:07 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:25:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:25:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:08.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:25:08 compute-1 ceph-mon[80135]: pgmap v1316: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:25:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/4110331943' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 21:25:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/4110331943' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 21:25:08 compute-1 sudo[254297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 21:25:08 compute-1 sudo[254297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:25:08 compute-1 sudo[254297]: pam_unix(sudo:session): session closed for user root
Nov 23 21:25:08 compute-1 sudo[254322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 21:25:08 compute-1 sudo[254322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:25:08 compute-1 nova_compute[230183]: 2025-11-23 21:25:08.934 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:25:08 compute-1 nova_compute[230183]: 2025-11-23 21:25:08.937 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:25:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:25:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:08.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:25:09 compute-1 sudo[254322]: pam_unix(sudo:session): session closed for user root
Nov 23 21:25:09 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 23 21:25:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:25:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:10.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:25:10 compute-1 sudo[254379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:25:10 compute-1 sudo[254379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:25:10 compute-1 sudo[254379]: pam_unix(sudo:session): session closed for user root
Nov 23 21:25:10 compute-1 ceph-mon[80135]: pgmap v1317: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:25:10 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:25:10 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 21:25:10 compute-1 ceph-mon[80135]: pgmap v1318: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 21:25:10 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:25:10 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:25:10 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 21:25:10 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 21:25:10 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:25:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:25:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:10.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:25:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:25:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:12.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:25:12 compute-1 nova_compute[230183]: 2025-11-23 21:25:12.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:25:12 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:25:12 compute-1 nova_compute[230183]: 2025-11-23 21:25:12.447 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:25:12 compute-1 nova_compute[230183]: 2025-11-23 21:25:12.448 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:25:12 compute-1 nova_compute[230183]: 2025-11-23 21:25:12.448 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:25:12 compute-1 nova_compute[230183]: 2025-11-23 21:25:12.448 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 21:25:12 compute-1 nova_compute[230183]: 2025-11-23 21:25:12.448 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:25:12 compute-1 ceph-mon[80135]: pgmap v1319: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 21:25:12 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:25:12 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2706958688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:25:12 compute-1 nova_compute[230183]: 2025-11-23 21:25:12.914 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:25:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:25:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:12.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:25:13 compute-1 nova_compute[230183]: 2025-11-23 21:25:13.064 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 21:25:13 compute-1 nova_compute[230183]: 2025-11-23 21:25:13.065 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4857MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 21:25:13 compute-1 nova_compute[230183]: 2025-11-23 21:25:13.066 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:25:13 compute-1 nova_compute[230183]: 2025-11-23 21:25:13.066 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:25:13 compute-1 nova_compute[230183]: 2025-11-23 21:25:13.118 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 21:25:13 compute-1 nova_compute[230183]: 2025-11-23 21:25:13.118 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 21:25:13 compute-1 nova_compute[230183]: 2025-11-23 21:25:13.131 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:25:13 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:25:13 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1955916470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:25:13 compute-1 nova_compute[230183]: 2025-11-23 21:25:13.567 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:25:13 compute-1 nova_compute[230183]: 2025-11-23 21:25:13.576 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:25:13 compute-1 nova_compute[230183]: 2025-11-23 21:25:13.592 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:25:13 compute-1 nova_compute[230183]: 2025-11-23 21:25:13.595 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 21:25:13 compute-1 nova_compute[230183]: 2025-11-23 21:25:13.596 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.530s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:25:13 compute-1 nova_compute[230183]: 2025-11-23 21:25:13.937 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:25:13 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2706958688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:25:13 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1955916470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:25:13 compute-1 ceph-mon[80135]: pgmap v1320: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 21:25:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:25:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:14.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:25:14 compute-1 nova_compute[230183]: 2025-11-23 21:25:14.597 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:25:14 compute-1 nova_compute[230183]: 2025-11-23 21:25:14.597 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:25:14 compute-1 nova_compute[230183]: 2025-11-23 21:25:14.597 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:25:14 compute-1 nova_compute[230183]: 2025-11-23 21:25:14.598 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 21:25:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:25:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:14.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:25:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:25:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:16.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:25:16 compute-1 nova_compute[230183]: 2025-11-23 21:25:16.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:25:16 compute-1 nova_compute[230183]: 2025-11-23 21:25:16.428 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 21:25:16 compute-1 nova_compute[230183]: 2025-11-23 21:25:16.428 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 21:25:16 compute-1 nova_compute[230183]: 2025-11-23 21:25:16.439 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 21:25:16 compute-1 nova_compute[230183]: 2025-11-23 21:25:16.439 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:25:16 compute-1 ceph-mon[80135]: pgmap v1321: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 21:25:16 compute-1 sudo[254451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 21:25:16 compute-1 sudo[254451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:25:16 compute-1 sudo[254451]: pam_unix(sudo:session): session closed for user root
Nov 23 21:25:16 compute-1 podman[254476]: 2025-11-23 21:25:16.973718603 +0000 UTC m=+0.049586743 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 23 21:25:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:25:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:16.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:25:17 compute-1 podman[254475]: 2025-11-23 21:25:17.009702974 +0000 UTC m=+0.086877969 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 21:25:17 compute-1 nova_compute[230183]: 2025-11-23 21:25:17.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:25:17 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:25:17 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:25:17 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:25:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:25:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:18.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:25:18 compute-1 nova_compute[230183]: 2025-11-23 21:25:18.423 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:25:18 compute-1 ceph-mon[80135]: pgmap v1322: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 21:25:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:25:18 compute-1 nova_compute[230183]: 2025-11-23 21:25:18.939 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:25:18 compute-1 sshd-session[254522]: Invalid user dogecoin from 92.118.39.92 port 55370
Nov 23 21:25:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:25:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:18.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:25:19 compute-1 sshd-session[254522]: Connection closed by invalid user dogecoin 92.118.39.92 port 55370 [preauth]
Nov 23 21:25:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.002000052s ======
Nov 23 21:25:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:20.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Nov 23 21:25:20 compute-1 podman[254525]: 2025-11-23 21:25:20.233542237 +0000 UTC m=+0.059427537 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 21:25:20 compute-1 ceph-mon[80135]: pgmap v1323: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 21:25:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:25:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:21.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:25:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:25:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:22.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:25:22 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:25:22 compute-1 ceph-mon[80135]: pgmap v1324: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:25:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:25:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:23.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:25:23 compute-1 nova_compute[230183]: 2025-11-23 21:25:23.941 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:25:23 compute-1 ceph-mon[80135]: pgmap v1325: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:25:23 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/744591241' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:25:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:25:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:24.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:25:25 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3439106909' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:25:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:25:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:25.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:25:26 compute-1 ceph-mon[80135]: pgmap v1326: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:25:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:25:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:26.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:25:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:25:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:27.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:25:27 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:25:27 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2030739848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:25:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:25:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:28.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:25:28 compute-1 ceph-mon[80135]: pgmap v1327: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:25:28 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/946564833' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:25:28 compute-1 nova_compute[230183]: 2025-11-23 21:25:28.943 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:25:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:25:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:29.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:25:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:25:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:30.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:25:30 compute-1 sudo[254551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:25:30 compute-1 sudo[254551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:25:30 compute-1 sudo[254551]: pam_unix(sudo:session): session closed for user root
Nov 23 21:25:30 compute-1 ceph-mon[80135]: pgmap v1328: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:25:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:25:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:31.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:25:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:25:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:32.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:25:32 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:25:32 compute-1 ceph-mon[80135]: pgmap v1329: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:25:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:25:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:33.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:25:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:25:33 compute-1 nova_compute[230183]: 2025-11-23 21:25:33.945 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:25:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:25:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:34.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:25:34 compute-1 ceph-mon[80135]: pgmap v1330: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:25:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:25:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:35.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:25:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:25:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:36.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:25:36 compute-1 ceph-mon[80135]: pgmap v1331: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:25:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:25:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:37.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:25:37 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:25:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:25:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:38.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:25:38 compute-1 ceph-mon[80135]: pgmap v1332: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:25:38 compute-1 nova_compute[230183]: 2025-11-23 21:25:38.947 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:25:38 compute-1 nova_compute[230183]: 2025-11-23 21:25:38.948 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:25:38 compute-1 nova_compute[230183]: 2025-11-23 21:25:38.948 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 23 21:25:38 compute-1 nova_compute[230183]: 2025-11-23 21:25:38.949 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:25:38 compute-1 nova_compute[230183]: 2025-11-23 21:25:38.949 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:25:38 compute-1 nova_compute[230183]: 2025-11-23 21:25:38.951 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:25:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 21:25:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:39.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 21:25:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:25:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:40.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:25:40 compute-1 ceph-mon[80135]: pgmap v1333: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:25:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:25:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:41.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:25:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:25:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:42.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:25:42 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:25:42 compute-1 ceph-mon[80135]: pgmap v1334: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:25:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:25:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:43.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:25:43 compute-1 nova_compute[230183]: 2025-11-23 21:25:43.952 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:25:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:25:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:44.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:25:44 compute-1 ceph-mon[80135]: pgmap v1335: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:25:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:25:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:45.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:25:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:25:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:46.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:25:46 compute-1 ceph-mon[80135]: pgmap v1336: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:25:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:25:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:47.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:25:47 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:25:47 compute-1 podman[254585]: 2025-11-23 21:25:47.671505896 +0000 UTC m=+0.071524639 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 23 21:25:47 compute-1 podman[254584]: 2025-11-23 21:25:47.756915394 +0000 UTC m=+0.154636125 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 21:25:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:25:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:48.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:25:48 compute-1 ceph-mon[80135]: pgmap v1337: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:25:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:25:48 compute-1 nova_compute[230183]: 2025-11-23 21:25:48.953 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:25:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:25:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:49.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:25:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:25:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:50.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:25:50 compute-1 sudo[254631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:25:50 compute-1 sudo[254631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:25:50 compute-1 sudo[254631]: pam_unix(sudo:session): session closed for user root
Nov 23 21:25:50 compute-1 podman[254655]: 2025-11-23 21:25:50.600788312 +0000 UTC m=+0.095705084 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Nov 23 21:25:50 compute-1 ceph-mon[80135]: pgmap v1338: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:25:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:25:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:51.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:25:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:25:51.087 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:25:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:25:51.087 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:25:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:25:51.087 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:25:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:25:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:52.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:25:52 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:25:52 compute-1 ceph-mon[80135]: pgmap v1339: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:25:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:25:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:53.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:25:53 compute-1 nova_compute[230183]: 2025-11-23 21:25:53.956 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:25:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:25:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:54.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:25:54 compute-1 ceph-mon[80135]: pgmap v1340: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:25:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:25:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:55.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:25:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:25:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:56.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:25:56 compute-1 ceph-mon[80135]: pgmap v1341: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:25:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:25:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:57.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:25:57 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:25:57 compute-1 ceph-mon[80135]: pgmap v1342: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:25:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:25:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:58.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:25:58 compute-1 nova_compute[230183]: 2025-11-23 21:25:58.957 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:25:58 compute-1 nova_compute[230183]: 2025-11-23 21:25:58.958 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:25:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:25:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:25:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:59.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:26:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:00.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:26:00 compute-1 ceph-mon[80135]: pgmap v1343: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:26:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:01.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:02.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:02 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:26:02 compute-1 ceph-mon[80135]: pgmap v1344: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:26:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:03.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:26:03 compute-1 nova_compute[230183]: 2025-11-23 21:26:03.959 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:26:03 compute-1 nova_compute[230183]: 2025-11-23 21:26:03.960 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:26:03 compute-1 nova_compute[230183]: 2025-11-23 21:26:03.960 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 23 21:26:03 compute-1 nova_compute[230183]: 2025-11-23 21:26:03.960 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:26:03 compute-1 nova_compute[230183]: 2025-11-23 21:26:03.961 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:26:03 compute-1 nova_compute[230183]: 2025-11-23 21:26:03.962 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:26:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:04.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:04 compute-1 ceph-mon[80135]: pgmap v1345: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:26:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:05.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:06.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:06 compute-1 ceph-mon[80135]: pgmap v1346: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:26:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:07.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:07 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:26:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:08.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:08 compute-1 nova_compute[230183]: 2025-11-23 21:26:08.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:26:08 compute-1 ceph-mon[80135]: pgmap v1347: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:26:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/579471618' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 21:26:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/579471618' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 21:26:08 compute-1 nova_compute[230183]: 2025-11-23 21:26:08.963 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:26:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:09.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:26:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:10.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:26:10 compute-1 sudo[254686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:26:10 compute-1 sudo[254686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:26:10 compute-1 sudo[254686]: pam_unix(sudo:session): session closed for user root
Nov 23 21:26:10 compute-1 ceph-mon[80135]: pgmap v1348: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:26:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:11.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:12.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:12 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:26:12 compute-1 ceph-mon[80135]: pgmap v1349: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:26:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:13.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:13 compute-1 nova_compute[230183]: 2025-11-23 21:26:13.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:26:13 compute-1 nova_compute[230183]: 2025-11-23 21:26:13.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:26:13 compute-1 nova_compute[230183]: 2025-11-23 21:26:13.455 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:26:13 compute-1 nova_compute[230183]: 2025-11-23 21:26:13.455 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:26:13 compute-1 nova_compute[230183]: 2025-11-23 21:26:13.455 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:26:13 compute-1 nova_compute[230183]: 2025-11-23 21:26:13.456 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 21:26:13 compute-1 nova_compute[230183]: 2025-11-23 21:26:13.456 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:26:13 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:26:13 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3287437788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:26:13 compute-1 nova_compute[230183]: 2025-11-23 21:26:13.879 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:26:13 compute-1 nova_compute[230183]: 2025-11-23 21:26:13.963 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:26:14 compute-1 nova_compute[230183]: 2025-11-23 21:26:14.037 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 21:26:14 compute-1 nova_compute[230183]: 2025-11-23 21:26:14.038 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4871MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 21:26:14 compute-1 nova_compute[230183]: 2025-11-23 21:26:14.038 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:26:14 compute-1 nova_compute[230183]: 2025-11-23 21:26:14.039 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:26:14 compute-1 nova_compute[230183]: 2025-11-23 21:26:14.119 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 21:26:14 compute-1 nova_compute[230183]: 2025-11-23 21:26:14.120 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 21:26:14 compute-1 nova_compute[230183]: 2025-11-23 21:26:14.164 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:26:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:14.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:14 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:26:14 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/983736605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:26:14 compute-1 nova_compute[230183]: 2025-11-23 21:26:14.615 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:26:14 compute-1 nova_compute[230183]: 2025-11-23 21:26:14.620 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:26:14 compute-1 nova_compute[230183]: 2025-11-23 21:26:14.635 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:26:14 compute-1 nova_compute[230183]: 2025-11-23 21:26:14.636 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 21:26:14 compute-1 nova_compute[230183]: 2025-11-23 21:26:14.636 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:26:14 compute-1 ceph-mon[80135]: pgmap v1350: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:26:14 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3287437788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:26:14 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/983736605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:26:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:15.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:15 compute-1 nova_compute[230183]: 2025-11-23 21:26:15.636 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:26:15 compute-1 nova_compute[230183]: 2025-11-23 21:26:15.637 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:26:15 compute-1 nova_compute[230183]: 2025-11-23 21:26:15.637 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 21:26:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:16.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:16 compute-1 ceph-mon[80135]: pgmap v1351: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:26:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:17.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:17 compute-1 sudo[254758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 21:26:17 compute-1 sudo[254758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:26:17 compute-1 sudo[254758]: pam_unix(sudo:session): session closed for user root
Nov 23 21:26:17 compute-1 sudo[254783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 23 21:26:17 compute-1 sudo[254783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:26:17 compute-1 nova_compute[230183]: 2025-11-23 21:26:17.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:26:17 compute-1 nova_compute[230183]: 2025-11-23 21:26:17.427 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 21:26:17 compute-1 nova_compute[230183]: 2025-11-23 21:26:17.427 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 21:26:17 compute-1 nova_compute[230183]: 2025-11-23 21:26:17.442 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 21:26:17 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:26:17 compute-1 podman[254880]: 2025-11-23 21:26:17.724810625 +0000 UTC m=+0.053953150 container exec e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 21:26:17 compute-1 podman[254880]: 2025-11-23 21:26:17.809676188 +0000 UTC m=+0.138818633 container exec_died e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 23 21:26:17 compute-1 podman[254915]: 2025-11-23 21:26:17.954116451 +0000 UTC m=+0.060284639 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 23 21:26:18 compute-1 podman[254914]: 2025-11-23 21:26:18.011012119 +0000 UTC m=+0.122617212 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 21:26:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:26:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:18.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:26:18 compute-1 podman[255040]: 2025-11-23 21:26:18.41331909 +0000 UTC m=+0.138462115 container exec 64d60b8099df0a9bc1b978bb8d0ff809e5476e0bdc0e1ff07d52a594a6c59770 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 21:26:18 compute-1 nova_compute[230183]: 2025-11-23 21:26:18.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:26:18 compute-1 podman[255040]: 2025-11-23 21:26:18.616112969 +0000 UTC m=+0.341256064 container exec_died 64d60b8099df0a9bc1b978bb8d0ff809e5476e0bdc0e1ff07d52a594a6c59770 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 21:26:18 compute-1 ceph-mon[80135]: pgmap v1352: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:26:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:26:18 compute-1 nova_compute[230183]: 2025-11-23 21:26:18.966 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:26:18 compute-1 nova_compute[230183]: 2025-11-23 21:26:18.968 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:26:18 compute-1 nova_compute[230183]: 2025-11-23 21:26:18.969 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 23 21:26:18 compute-1 nova_compute[230183]: 2025-11-23 21:26:18.969 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:26:18 compute-1 nova_compute[230183]: 2025-11-23 21:26:18.996 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:26:18 compute-1 nova_compute[230183]: 2025-11-23 21:26:18.997 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:26:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:19.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:19 compute-1 podman[255179]: 2025-11-23 21:26:19.316196444 +0000 UTC m=+0.067106291 container exec 5efdb4ba0bcd5fe6f292f73f388707523f3095db64c5b10f074cdf2e15575dfb (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei)
Nov 23 21:26:19 compute-1 podman[255179]: 2025-11-23 21:26:19.333370782 +0000 UTC m=+0.084280629 container exec_died 5efdb4ba0bcd5fe6f292f73f388707523f3095db64c5b10f074cdf2e15575dfb (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei)
Nov 23 21:26:19 compute-1 nova_compute[230183]: 2025-11-23 21:26:19.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:26:19 compute-1 podman[255246]: 2025-11-23 21:26:19.525783115 +0000 UTC m=+0.044207851 container exec 2804f80c8f66202230c93ef9e5dfb79827d221d8c2f51d077915585a4021bec3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, io.buildah.version=1.28.2, release=1793, vcs-type=git, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, build-date=2023-02-22T09:23:20, name=keepalived, vendor=Red Hat, Inc.)
Nov 23 21:26:19 compute-1 podman[255246]: 2025-11-23 21:26:19.537133467 +0000 UTC m=+0.055558183 container exec_died 2804f80c8f66202230c93ef9e5dfb79827d221d8c2f51d077915585a4021bec3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc, io.openshift.expose-services=, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., distribution-scope=public, description=keepalived for Ceph, version=2.2.4, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.buildah.version=1.28.2, release=1793, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph.)
Nov 23 21:26:19 compute-1 sudo[254783]: pam_unix(sudo:session): session closed for user root
Nov 23 21:26:19 compute-1 sudo[255280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 21:26:19 compute-1 sudo[255280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:26:19 compute-1 sudo[255280]: pam_unix(sudo:session): session closed for user root
Nov 23 21:26:19 compute-1 sudo[255306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 21:26:19 compute-1 sudo[255306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:26:20 compute-1 sudo[255306]: pam_unix(sudo:session): session closed for user root
Nov 23 21:26:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:20.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:20 compute-1 nova_compute[230183]: 2025-11-23 21:26:20.423 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:26:20 compute-1 ceph-mon[80135]: pgmap v1353: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:26:20 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:26:20 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:26:20 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 23 21:26:20 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:26:20 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:26:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:26:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:21.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:26:21 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 23 21:26:21 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:26:21 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 21:26:21 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:26:21 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:26:21 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 21:26:21 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 21:26:21 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:26:21 compute-1 podman[255363]: 2025-11-23 21:26:21.669038873 +0000 UTC m=+0.079750788 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 23 21:26:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:22.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:22 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:26:22 compute-1 ceph-mon[80135]: pgmap v1354: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 23 21:26:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:23.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:23 compute-1 nova_compute[230183]: 2025-11-23 21:26:23.996 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:26:24 compute-1 nova_compute[230183]: 2025-11-23 21:26:23.999 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:26:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:26:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:24.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:26:24 compute-1 ceph-mon[80135]: pgmap v1355: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 23 21:26:24 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2246533849' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:26:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:25.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:25 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2088859336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:26:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:26.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:26 compute-1 sudo[255386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 21:26:26 compute-1 sudo[255386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:26:26 compute-1 sudo[255386]: pam_unix(sudo:session): session closed for user root
Nov 23 21:26:26 compute-1 ceph-mon[80135]: pgmap v1356: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:26:26 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:26:26 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:26:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:26:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:27.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:26:27 compute-1 nova_compute[230183]: 2025-11-23 21:26:27.422 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:26:27 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:26:27 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2081108293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:26:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:28.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:28 compute-1 ceph-mon[80135]: pgmap v1357: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 23 21:26:28 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1828028422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:26:29 compute-1 nova_compute[230183]: 2025-11-23 21:26:29.001 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:26:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 21:26:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:29.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 21:26:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:30.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:30 compute-1 sudo[255413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:26:30 compute-1 sudo[255413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:26:30 compute-1 sudo[255413]: pam_unix(sudo:session): session closed for user root
Nov 23 21:26:30 compute-1 ceph-mon[80135]: pgmap v1358: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 23 21:26:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:31.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:32.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:32 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:26:32 compute-1 ceph-mon[80135]: pgmap v1359: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 23 21:26:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:33.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:26:34 compute-1 nova_compute[230183]: 2025-11-23 21:26:34.002 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:26:34 compute-1 nova_compute[230183]: 2025-11-23 21:26:34.004 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:26:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:34.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:34 compute-1 ceph-mon[80135]: pgmap v1360: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:26:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:35.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:36.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:36 compute-1 ceph-mon[80135]: pgmap v1361: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:26:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:37.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:37 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:26:38 compute-1 ceph-mon[80135]: pgmap v1362: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:26:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:38.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:39 compute-1 nova_compute[230183]: 2025-11-23 21:26:39.004 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:26:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:39.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:40 compute-1 ceph-mon[80135]: pgmap v1363: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:26:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:40.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:41.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:42 compute-1 ceph-mon[80135]: pgmap v1364: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:26:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:26:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:42.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:26:42 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:26:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:43.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:43 compute-1 sshd-session[255444]: Invalid user solana from 161.35.133.66 port 46260
Nov 23 21:26:43 compute-1 sshd-session[255444]: Connection closed by invalid user solana 161.35.133.66 port 46260 [preauth]
Nov 23 21:26:44 compute-1 nova_compute[230183]: 2025-11-23 21:26:44.006 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:26:44 compute-1 nova_compute[230183]: 2025-11-23 21:26:44.009 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:26:44 compute-1 nova_compute[230183]: 2025-11-23 21:26:44.009 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 23 21:26:44 compute-1 nova_compute[230183]: 2025-11-23 21:26:44.009 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:26:44 compute-1 nova_compute[230183]: 2025-11-23 21:26:44.041 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:26:44 compute-1 nova_compute[230183]: 2025-11-23 21:26:44.042 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:26:44 compute-1 ceph-mon[80135]: pgmap v1365: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:26:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:26:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:44.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:26:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:26:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:45.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:26:46 compute-1 sshd-session[255448]: banner exchange: Connection from 159.89.12.166 port 55470: invalid format
Nov 23 21:26:46 compute-1 ceph-mon[80135]: pgmap v1366: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:26:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:46.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:46 compute-1 sshd-session[255449]: banner exchange: Connection from 159.89.12.166 port 55484: invalid format
Nov 23 21:26:47 compute-1 sshd-session[255450]: Connection reset by authenticating user root 159.89.12.166 port 55490 [preauth]
Nov 23 21:26:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:47.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:47 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:26:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:48.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:48 compute-1 ceph-mon[80135]: pgmap v1367: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:26:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:26:48 compute-1 podman[255454]: 2025-11-23 21:26:48.653425592 +0000 UTC m=+0.062139028 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 21:26:48 compute-1 podman[255453]: 2025-11-23 21:26:48.709647762 +0000 UTC m=+0.119364605 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 23 21:26:49 compute-1 nova_compute[230183]: 2025-11-23 21:26:49.042 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:26:49 compute-1 nova_compute[230183]: 2025-11-23 21:26:49.044 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:26:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:49.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:26:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:50.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:26:50 compute-1 ceph-mon[80135]: pgmap v1368: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:26:50 compute-1 sudo[255499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:26:50 compute-1 sudo[255499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:26:50 compute-1 sudo[255499]: pam_unix(sudo:session): session closed for user root
Nov 23 21:26:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:26:51.088 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:26:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:26:51.089 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:26:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:26:51.089 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:26:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:51.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:52.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:52 compute-1 ceph-mon[80135]: pgmap v1369: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:26:52 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:26:52 compute-1 podman[255525]: 2025-11-23 21:26:52.658673528 +0000 UTC m=+0.072618228 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 23 21:26:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:53.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:54 compute-1 nova_compute[230183]: 2025-11-23 21:26:54.043 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:26:54 compute-1 nova_compute[230183]: 2025-11-23 21:26:54.044 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:26:54 compute-1 nova_compute[230183]: 2025-11-23 21:26:54.044 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 23 21:26:54 compute-1 nova_compute[230183]: 2025-11-23 21:26:54.045 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:26:54 compute-1 nova_compute[230183]: 2025-11-23 21:26:54.045 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:26:54 compute-1 nova_compute[230183]: 2025-11-23 21:26:54.047 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:26:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:54.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:54 compute-1 ceph-mon[80135]: pgmap v1370: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:26:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:26:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:55.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:26:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:26:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:56.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:26:56 compute-1 ceph-mon[80135]: pgmap v1371: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:26:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:57.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:57 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:26:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:58.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:26:58 compute-1 ceph-mon[80135]: pgmap v1372: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:26:59 compute-1 nova_compute[230183]: 2025-11-23 21:26:59.047 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:26:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:26:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:26:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:59.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:27:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:27:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:00.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:27:00 compute-1 ceph-mon[80135]: pgmap v1373: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:27:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:27:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:01.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:27:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:27:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:02.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:27:02 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:27:02 compute-1 ceph-mon[80135]: pgmap v1374: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:27:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:27:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:03.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:27:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:27:04 compute-1 nova_compute[230183]: 2025-11-23 21:27:04.049 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:27:04 compute-1 nova_compute[230183]: 2025-11-23 21:27:04.050 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:27:04 compute-1 nova_compute[230183]: 2025-11-23 21:27:04.050 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 23 21:27:04 compute-1 nova_compute[230183]: 2025-11-23 21:27:04.050 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:27:04 compute-1 nova_compute[230183]: 2025-11-23 21:27:04.051 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:27:04 compute-1 nova_compute[230183]: 2025-11-23 21:27:04.053 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:27:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:27:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:04.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:27:04 compute-1 ceph-mon[80135]: pgmap v1375: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:27:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:27:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:05.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:27:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:27:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:06.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:27:06 compute-1 ceph-mon[80135]: pgmap v1376: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:27:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 21:27:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:07.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 21:27:07 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:27:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:27:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:08.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:27:08 compute-1 ceph-mon[80135]: pgmap v1377: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:27:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/801547798' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 21:27:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/801547798' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 21:27:09 compute-1 nova_compute[230183]: 2025-11-23 21:27:09.053 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:27:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:27:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:09.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:27:10 compute-1 nova_compute[230183]: 2025-11-23 21:27:10.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:27:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:27:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:10.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:27:10 compute-1 ceph-mon[80135]: pgmap v1378: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:27:10 compute-1 sudo[255554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:27:10 compute-1 sudo[255554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:27:10 compute-1 sudo[255554]: pam_unix(sudo:session): session closed for user root
Nov 23 21:27:11 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #76. Immutable memtables: 0.
Nov 23 21:27:11 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:27:11.111071) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 21:27:11 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 76
Nov 23 21:27:11 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763933231111168, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1593, "num_deletes": 255, "total_data_size": 4026113, "memory_usage": 4079280, "flush_reason": "Manual Compaction"}
Nov 23 21:27:11 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #77: started
Nov 23 21:27:11 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763933231129016, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 77, "file_size": 2630519, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39548, "largest_seqno": 41136, "table_properties": {"data_size": 2623758, "index_size": 3896, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14204, "raw_average_key_size": 19, "raw_value_size": 2610121, "raw_average_value_size": 3660, "num_data_blocks": 168, "num_entries": 713, "num_filter_entries": 713, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763933097, "oldest_key_time": 1763933097, "file_creation_time": 1763933231, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Nov 23 21:27:11 compute-1 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 17938 microseconds, and 5610 cpu microseconds.
Nov 23 21:27:11 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 21:27:11 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:27:11.129061) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #77: 2630519 bytes OK
Nov 23 21:27:11 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:27:11.129082) [db/memtable_list.cc:519] [default] Level-0 commit table #77 started
Nov 23 21:27:11 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:27:11.130849) [db/memtable_list.cc:722] [default] Level-0 commit table #77: memtable #1 done
Nov 23 21:27:11 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:27:11.130876) EVENT_LOG_v1 {"time_micros": 1763933231130858, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 21:27:11 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:27:11.130893) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 21:27:11 compute-1 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 4018763, prev total WAL file size 4018763, number of live WAL files 2.
Nov 23 21:27:11 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000073.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:27:11 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:27:11.131696) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303034' seq:72057594037927935, type:22 .. '6C6F676D0031323535' seq:0, type:0; will stop at (end)
Nov 23 21:27:11 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 21:27:11 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [77(2568KB)], [75(12MB)]
Nov 23 21:27:11 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763933231131720, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [77], "files_L6": [75], "score": -1, "input_data_size": 15617798, "oldest_snapshot_seqno": -1}
Nov 23 21:27:11 compute-1 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #78: 6894 keys, 15452822 bytes, temperature: kUnknown
Nov 23 21:27:11 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763933231203771, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 78, "file_size": 15452822, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15407049, "index_size": 27421, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17285, "raw_key_size": 181176, "raw_average_key_size": 26, "raw_value_size": 15283088, "raw_average_value_size": 2216, "num_data_blocks": 1083, "num_entries": 6894, "num_filter_entries": 6894, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763933231, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 78, "seqno_to_time_mapping": "N/A"}}
Nov 23 21:27:11 compute-1 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 21:27:11 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:27:11.204052) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 15452822 bytes
Nov 23 21:27:11 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:27:11.209276) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 216.4 rd, 214.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 12.4 +0.0 blob) out(14.7 +0.0 blob), read-write-amplify(11.8) write-amplify(5.9) OK, records in: 7422, records dropped: 528 output_compression: NoCompression
Nov 23 21:27:11 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:27:11.209296) EVENT_LOG_v1 {"time_micros": 1763933231209287, "job": 46, "event": "compaction_finished", "compaction_time_micros": 72172, "compaction_time_cpu_micros": 27394, "output_level": 6, "num_output_files": 1, "total_output_size": 15452822, "num_input_records": 7422, "num_output_records": 6894, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 21:27:11 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:27:11 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763933231209894, "job": 46, "event": "table_file_deletion", "file_number": 77}
Nov 23 21:27:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:27:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:11.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:27:11 compute-1 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000075.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 21:27:11 compute-1 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763933231212978, "job": 46, "event": "table_file_deletion", "file_number": 75}
Nov 23 21:27:11 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:27:11.131639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:27:11 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:27:11.213072) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:27:11 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:27:11.213078) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:27:11 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:27:11.213080) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:27:11 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:27:11.213082) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:27:11 compute-1 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:27:11.213084) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 21:27:12 compute-1 ceph-mon[80135]: pgmap v1379: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:27:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:27:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:12.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:27:12 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:27:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:27:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:13.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:27:14 compute-1 nova_compute[230183]: 2025-11-23 21:27:14.054 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:27:14 compute-1 nova_compute[230183]: 2025-11-23 21:27:14.056 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:27:14 compute-1 nova_compute[230183]: 2025-11-23 21:27:14.056 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 23 21:27:14 compute-1 nova_compute[230183]: 2025-11-23 21:27:14.056 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:27:14 compute-1 nova_compute[230183]: 2025-11-23 21:27:14.094 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:27:14 compute-1 nova_compute[230183]: 2025-11-23 21:27:14.094 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:27:14 compute-1 ceph-mon[80135]: pgmap v1380: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:27:14 compute-1 nova_compute[230183]: 2025-11-23 21:27:14.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:27:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:27:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:14.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:27:14 compute-1 nova_compute[230183]: 2025-11-23 21:27:14.453 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:27:14 compute-1 nova_compute[230183]: 2025-11-23 21:27:14.454 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:27:14 compute-1 nova_compute[230183]: 2025-11-23 21:27:14.454 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:27:14 compute-1 nova_compute[230183]: 2025-11-23 21:27:14.455 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 21:27:14 compute-1 nova_compute[230183]: 2025-11-23 21:27:14.455 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:27:14 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:27:14 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1023863718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:27:14 compute-1 nova_compute[230183]: 2025-11-23 21:27:14.926 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:27:15 compute-1 nova_compute[230183]: 2025-11-23 21:27:15.097 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 21:27:15 compute-1 nova_compute[230183]: 2025-11-23 21:27:15.098 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4862MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 21:27:15 compute-1 nova_compute[230183]: 2025-11-23 21:27:15.099 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:27:15 compute-1 nova_compute[230183]: 2025-11-23 21:27:15.099 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:27:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:27:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:15.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:27:15 compute-1 nova_compute[230183]: 2025-11-23 21:27:15.296 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 21:27:15 compute-1 nova_compute[230183]: 2025-11-23 21:27:15.297 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 21:27:15 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1023863718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:27:15 compute-1 nova_compute[230183]: 2025-11-23 21:27:15.452 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Refreshing inventories for resource provider bb217351-d4c8-44a4-9137-08393a1f72bc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 23 21:27:15 compute-1 nova_compute[230183]: 2025-11-23 21:27:15.543 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Updating ProviderTree inventory for provider bb217351-d4c8-44a4-9137-08393a1f72bc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 23 21:27:15 compute-1 nova_compute[230183]: 2025-11-23 21:27:15.544 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Updating inventory in ProviderTree for provider bb217351-d4c8-44a4-9137-08393a1f72bc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 23 21:27:15 compute-1 nova_compute[230183]: 2025-11-23 21:27:15.560 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Refreshing aggregate associations for resource provider bb217351-d4c8-44a4-9137-08393a1f72bc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 23 21:27:15 compute-1 nova_compute[230183]: 2025-11-23 21:27:15.590 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Refreshing trait associations for resource provider bb217351-d4c8-44a4-9137-08393a1f72bc, traits: COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,HW_CPU_X86_AVX,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SHA,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SVM,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 23 21:27:15 compute-1 nova_compute[230183]: 2025-11-23 21:27:15.606 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:27:16 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:27:16 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/258649622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:27:16 compute-1 nova_compute[230183]: 2025-11-23 21:27:16.087 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:27:16 compute-1 nova_compute[230183]: 2025-11-23 21:27:16.091 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:27:16 compute-1 nova_compute[230183]: 2025-11-23 21:27:16.108 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:27:16 compute-1 nova_compute[230183]: 2025-11-23 21:27:16.110 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 21:27:16 compute-1 nova_compute[230183]: 2025-11-23 21:27:16.110 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.011s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:27:16 compute-1 nova_compute[230183]: 2025-11-23 21:27:16.110 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:27:16 compute-1 ceph-mon[80135]: pgmap v1381: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:27:16 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/258649622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:27:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:27:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:16.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:27:17 compute-1 nova_compute[230183]: 2025-11-23 21:27:17.122 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:27:17 compute-1 nova_compute[230183]: 2025-11-23 21:27:17.123 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:27:17 compute-1 nova_compute[230183]: 2025-11-23 21:27:17.123 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 21:27:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:27:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:17.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:27:17 compute-1 nova_compute[230183]: 2025-11-23 21:27:17.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:27:17 compute-1 nova_compute[230183]: 2025-11-23 21:27:17.428 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 21:27:17 compute-1 nova_compute[230183]: 2025-11-23 21:27:17.428 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 21:27:17 compute-1 nova_compute[230183]: 2025-11-23 21:27:17.448 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 21:27:17 compute-1 nova_compute[230183]: 2025-11-23 21:27:17.449 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:27:17 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:27:18 compute-1 ceph-mon[80135]: pgmap v1382: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:27:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:27:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:27:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:18.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:27:19 compute-1 nova_compute[230183]: 2025-11-23 21:27:19.095 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:27:19 compute-1 nova_compute[230183]: 2025-11-23 21:27:19.097 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:27:19 compute-1 nova_compute[230183]: 2025-11-23 21:27:19.098 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 23 21:27:19 compute-1 nova_compute[230183]: 2025-11-23 21:27:19.098 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:27:19 compute-1 nova_compute[230183]: 2025-11-23 21:27:19.128 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:27:19 compute-1 nova_compute[230183]: 2025-11-23 21:27:19.130 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:27:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:27:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:19.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:27:19 compute-1 nova_compute[230183]: 2025-11-23 21:27:19.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:27:19 compute-1 nova_compute[230183]: 2025-11-23 21:27:19.428 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:27:19 compute-1 nova_compute[230183]: 2025-11-23 21:27:19.428 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 23 21:27:19 compute-1 podman[255628]: 2025-11-23 21:27:19.693206635 +0000 UTC m=+0.089991271 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 21:27:19 compute-1 podman[255627]: 2025-11-23 21:27:19.745145751 +0000 UTC m=+0.146546780 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 23 21:27:20 compute-1 ceph-mon[80135]: pgmap v1383: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:27:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:27:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:20.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:27:20 compute-1 nova_compute[230183]: 2025-11-23 21:27:20.443 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:27:20 compute-1 nova_compute[230183]: 2025-11-23 21:27:20.444 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 23 21:27:20 compute-1 nova_compute[230183]: 2025-11-23 21:27:20.460 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 23 21:27:21 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:21 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:27:21 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:21.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:27:21 compute-1 nova_compute[230183]: 2025-11-23 21:27:21.444 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:27:22 compute-1 nova_compute[230183]: 2025-11-23 21:27:22.423 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:27:22 compute-1 ceph-mon[80135]: pgmap v1384: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:27:22 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:22 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:27:22 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:22.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:27:22 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:27:23 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:23 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:27:23 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:23.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:27:23 compute-1 podman[255674]: 2025-11-23 21:27:23.627696115 +0000 UTC m=+0.045189077 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 21:27:24 compute-1 nova_compute[230183]: 2025-11-23 21:27:24.131 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:27:24 compute-1 nova_compute[230183]: 2025-11-23 21:27:24.133 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:27:24 compute-1 nova_compute[230183]: 2025-11-23 21:27:24.133 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 23 21:27:24 compute-1 nova_compute[230183]: 2025-11-23 21:27:24.133 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:27:24 compute-1 nova_compute[230183]: 2025-11-23 21:27:24.182 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:27:24 compute-1 nova_compute[230183]: 2025-11-23 21:27:24.182 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:27:24 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:24 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:27:24 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:24.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:27:24 compute-1 ceph-mon[80135]: pgmap v1385: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:27:25 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:25 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:27:25 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:25.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:27:26 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:26 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:27:26 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:26.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:27:26 compute-1 ceph-mon[80135]: pgmap v1386: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:27:26 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1904251977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:27:26 compute-1 sudo[255696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 23 21:27:26 compute-1 sudo[255696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:27:26 compute-1 sudo[255696]: pam_unix(sudo:session): session closed for user root
Nov 23 21:27:26 compute-1 sudo[255721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 23 21:27:26 compute-1 sudo[255721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:27:27 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:27 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:27:27 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:27.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:27:27 compute-1 sudo[255721]: pam_unix(sudo:session): session closed for user root
Nov 23 21:27:27 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:27:27 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/882900723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:27:28 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:28 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:27:28 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:28.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:27:28 compute-1 ceph-mon[80135]: pgmap v1387: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:27:28 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2428959963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:27:29 compute-1 nova_compute[230183]: 2025-11-23 21:27:29.183 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:27:29 compute-1 nova_compute[230183]: 2025-11-23 21:27:29.185 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:27:29 compute-1 nova_compute[230183]: 2025-11-23 21:27:29.185 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 23 21:27:29 compute-1 nova_compute[230183]: 2025-11-23 21:27:29.185 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:27:29 compute-1 nova_compute[230183]: 2025-11-23 21:27:29.221 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:27:29 compute-1 nova_compute[230183]: 2025-11-23 21:27:29.222 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:27:29 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:29 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:27:29 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:29.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:27:29 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2645231075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:27:30 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:30 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:27:30 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:30.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:27:30 compute-1 ceph-mon[80135]: pgmap v1388: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:27:30 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:27:30 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:27:31 compute-1 sudo[255780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:27:31 compute-1 sudo[255780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:27:31 compute-1 sudo[255780]: pam_unix(sudo:session): session closed for user root
Nov 23 21:27:31 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:31 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:27:31 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:31.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:27:31 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:27:31 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 21:27:31 compute-1 ceph-mon[80135]: pgmap v1389: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 1 op/s
Nov 23 21:27:31 compute-1 ceph-mon[80135]: pgmap v1390: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Nov 23 21:27:31 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:27:31 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:27:31 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 23 21:27:31 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 21:27:31 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:27:32 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:32 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:27:32 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:32.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:27:32 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:27:33 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:33 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:27:33 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:33.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:27:33 compute-1 ceph-mon[80135]: pgmap v1391: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Nov 23 21:27:33 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:27:34 compute-1 nova_compute[230183]: 2025-11-23 21:27:34.223 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:27:34 compute-1 nova_compute[230183]: 2025-11-23 21:27:34.224 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:27:34 compute-1 nova_compute[230183]: 2025-11-23 21:27:34.224 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 23 21:27:34 compute-1 nova_compute[230183]: 2025-11-23 21:27:34.224 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:27:34 compute-1 nova_compute[230183]: 2025-11-23 21:27:34.260 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:27:34 compute-1 nova_compute[230183]: 2025-11-23 21:27:34.260 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:27:34 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:34 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:27:34 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:34.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:27:35 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:35 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:27:35 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:35.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:27:35 compute-1 sudo[255808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 23 21:27:35 compute-1 sudo[255808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:27:35 compute-1 sudo[255808]: pam_unix(sudo:session): session closed for user root
Nov 23 21:27:35 compute-1 ceph-mon[80135]: pgmap v1392: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Nov 23 21:27:35 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:27:35 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 21:27:36 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:36 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:27:36 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:36.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:27:36 compute-1 ceph-mon[80135]: pgmap v1393: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Nov 23 21:27:37 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:37 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:27:37 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:37.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:27:37 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:27:38 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:38 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:27:38 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:38.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:27:39 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:39 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:27:39 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:39.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:27:39 compute-1 nova_compute[230183]: 2025-11-23 21:27:39.294 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:27:39 compute-1 nova_compute[230183]: 2025-11-23 21:27:39.296 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:27:39 compute-1 nova_compute[230183]: 2025-11-23 21:27:39.296 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5035 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 23 21:27:39 compute-1 nova_compute[230183]: 2025-11-23 21:27:39.296 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:27:39 compute-1 nova_compute[230183]: 2025-11-23 21:27:39.296 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:27:39 compute-1 ceph-mon[80135]: pgmap v1394: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Nov 23 21:27:40 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:40 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:27:40 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:40.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:27:41 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:41 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:27:41 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:41.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:27:41 compute-1 ceph-mon[80135]: pgmap v1395: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 23 21:27:42 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:42 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:27:42 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:27:42 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:42.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:27:43 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:43 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:27:43 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:43.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:27:43 compute-1 ceph-mon[80135]: pgmap v1396: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:27:44 compute-1 nova_compute[230183]: 2025-11-23 21:27:44.298 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:27:44 compute-1 nova_compute[230183]: 2025-11-23 21:27:44.300 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:27:44 compute-1 nova_compute[230183]: 2025-11-23 21:27:44.300 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 23 21:27:44 compute-1 nova_compute[230183]: 2025-11-23 21:27:44.300 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:27:44 compute-1 nova_compute[230183]: 2025-11-23 21:27:44.332 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:27:44 compute-1 nova_compute[230183]: 2025-11-23 21:27:44.332 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:27:44 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:44 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:27:44 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:44.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:27:45 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:45 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:27:45 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:45.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:27:45 compute-1 ceph-mon[80135]: pgmap v1397: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:27:46 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:46 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:27:46 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:46.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:27:47 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:47 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:27:47 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:47.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:27:47 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:27:47 compute-1 ceph-mon[80135]: pgmap v1398: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:27:48 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:48 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:27:48 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:48.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:27:48 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:27:48 compute-1 ceph-mon[80135]: pgmap v1399: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:27:49 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:49 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:27:49 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:49.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:27:49 compute-1 nova_compute[230183]: 2025-11-23 21:27:49.333 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:27:49 compute-1 nova_compute[230183]: 2025-11-23 21:27:49.335 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:27:50 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:50 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:27:50 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:50.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:27:50 compute-1 podman[255841]: 2025-11-23 21:27:50.674433526 +0000 UTC m=+0.074849518 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 23 21:27:50 compute-1 podman[255840]: 2025-11-23 21:27:50.698178229 +0000 UTC m=+0.106985105 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2)
Nov 23 21:27:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:27:51.090 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:27:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:27:51.090 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:27:51 compute-1 ovn_metadata_agent[142153]: 2025-11-23 21:27:51.091 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:27:51 compute-1 sudo[255884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:27:51 compute-1 sudo[255884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:27:51 compute-1 sudo[255884]: pam_unix(sudo:session): session closed for user root
Nov 23 21:27:51 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:51 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:27:51 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:51.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:27:51 compute-1 ceph-mon[80135]: pgmap v1400: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:27:52 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:27:52 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:52 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:27:52 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:52.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:27:53 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:53 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:27:53 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:53.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:27:53 compute-1 ceph-mon[80135]: pgmap v1401: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:27:54 compute-1 sshd-session[255911]: Accepted publickey for zuul from 192.168.122.10 port 45314 ssh2: ECDSA SHA256:7LF3rB/846W//CS4OIcVKlH1BXQGVCcZuH+b9rjPyTo
Nov 23 21:27:54 compute-1 systemd-logind[793]: New session 58 of user zuul.
Nov 23 21:27:54 compute-1 systemd[1]: Started Session 58 of User zuul.
Nov 23 21:27:54 compute-1 sshd-session[255911]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 23 21:27:54 compute-1 podman[255913]: 2025-11-23 21:27:54.264364233 +0000 UTC m=+0.089144038 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd)
Nov 23 21:27:54 compute-1 sudo[255933]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Nov 23 21:27:54 compute-1 sudo[255933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 23 21:27:54 compute-1 nova_compute[230183]: 2025-11-23 21:27:54.336 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:27:54 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:54 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:27:54 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:54.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:27:55 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:55 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:27:55 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:55.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:27:55 compute-1 ceph-mon[80135]: pgmap v1402: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:27:56 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:56 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:27:56 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:56.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:27:57 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:57 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:27:57 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:57.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:27:57 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:27:57 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Nov 23 21:27:57 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1594982661' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 23 21:27:57 compute-1 ceph-mon[80135]: from='client.28037 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:27:57 compute-1 ceph-mon[80135]: pgmap v1403: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:27:57 compute-1 ceph-mon[80135]: from='client.18036 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:27:57 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1594982661' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 23 21:27:58 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:58 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:27:58 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:58.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:27:58 compute-1 ceph-mon[80135]: from='client.27001 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:27:58 compute-1 ceph-mon[80135]: from='client.28049 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:27:58 compute-1 ceph-mon[80135]: from='client.18051 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:27:58 compute-1 ceph-mon[80135]: from='client.27013 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:27:58 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1996954738' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 23 21:27:58 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2562450027' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 23 21:27:59 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:27:59 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:27:59 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:59.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:27:59 compute-1 nova_compute[230183]: 2025-11-23 21:27:59.339 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:27:59 compute-1 nova_compute[230183]: 2025-11-23 21:27:59.341 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:27:59 compute-1 nova_compute[230183]: 2025-11-23 21:27:59.341 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 23 21:27:59 compute-1 nova_compute[230183]: 2025-11-23 21:27:59.342 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:27:59 compute-1 nova_compute[230183]: 2025-11-23 21:27:59.377 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:27:59 compute-1 nova_compute[230183]: 2025-11-23 21:27:59.378 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:27:59 compute-1 ceph-mon[80135]: pgmap v1404: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:28:00 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:28:00 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:28:00 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:28:00.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:28:00 compute-1 ovs-vsctl[256257]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 23 21:28:00 compute-1 ceph-mon[80135]: pgmap v1405: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:28:01 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:28:01 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:28:01 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:28:01.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:28:01 compute-1 virtqemud[229705]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 23 21:28:01 compute-1 virtqemud[229705]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 23 21:28:01 compute-1 virtqemud[229705]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 23 21:28:02 compute-1 systemd[1]: Starting dnf makecache...
Nov 23 21:28:02 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: cache status {prefix=cache status} (starting...)
Nov 23 21:28:02 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 21:28:02 compute-1 dnf[256486]: Metadata cache refreshed recently.
Nov 23 21:28:02 compute-1 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 23 21:28:02 compute-1 systemd[1]: Finished dnf makecache.
Nov 23 21:28:02 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:28:02 compute-1 lvm[256590]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 21:28:02 compute-1 lvm[256590]: VG ceph_vg0 finished
Nov 23 21:28:02 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:28:02 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:28:02 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:28:02.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:28:02 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: client ls {prefix=client ls} (starting...)
Nov 23 21:28:02 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 21:28:03 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: damage ls {prefix=damage ls} (starting...)
Nov 23 21:28:03 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 21:28:03 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: dump loads {prefix=dump loads} (starting...)
Nov 23 21:28:03 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 21:28:03 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0)
Nov 23 21:28:03 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/13909160' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 23 21:28:03 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:28:03 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:28:03 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:28:03.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:28:03 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 23 21:28:03 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 21:28:03 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 23 21:28:03 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 21:28:03 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 23 21:28:03 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 21:28:03 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 21:28:03 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2469849677' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:28:03 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 23 21:28:03 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 21:28:03 compute-1 ceph-mon[80135]: pgmap v1406: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:28:03 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:28:03 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/13909160' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 23 21:28:03 compute-1 ceph-mon[80135]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 23 21:28:03 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2364723209' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 23 21:28:03 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2469849677' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:28:03 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2706595844' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:28:03 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 23 21:28:03 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 21:28:03 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0)
Nov 23 21:28:03 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3352348249' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 23 21:28:04 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 23 21:28:04 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 21:28:04 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: ops {prefix=ops} (starting...)
Nov 23 21:28:04 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 21:28:04 compute-1 nova_compute[230183]: 2025-11-23 21:28:04.378 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:28:04 compute-1 nova_compute[230183]: 2025-11-23 21:28:04.380 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:28:04 compute-1 nova_compute[230183]: 2025-11-23 21:28:04.380 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 23 21:28:04 compute-1 nova_compute[230183]: 2025-11-23 21:28:04.381 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:28:04 compute-1 nova_compute[230183]: 2025-11-23 21:28:04.417 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:28:04 compute-1 nova_compute[230183]: 2025-11-23 21:28:04.417 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:28:04 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Nov 23 21:28:04 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3035369335' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 23 21:28:04 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:28:04 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:28:04 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:28:04.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:28:04 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Nov 23 21:28:04 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1769929749' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 23 21:28:04 compute-1 ceph-mon[80135]: from='client.28082 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:04 compute-1 ceph-mon[80135]: from='client.18090 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:04 compute-1 ceph-mon[80135]: from='client.28106 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:04 compute-1 ceph-mon[80135]: from='client.18102 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:04 compute-1 ceph-mon[80135]: from='client.28130 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:04 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3352348249' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 23 21:28:04 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3184255387' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 23 21:28:04 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3541442377' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 23 21:28:04 compute-1 ceph-mon[80135]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 23 21:28:04 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3035369335' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 23 21:28:04 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/750063702' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 23 21:28:04 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1769929749' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 23 21:28:04 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3636553393' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 21:28:05 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: session ls {prefix=session ls} (starting...)
Nov 23 21:28:05 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 21:28:05 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 23 21:28:05 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2933207601' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 23 21:28:05 compute-1 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: status {prefix=status} (starting...)
Nov 23 21:28:05 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:28:05 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:28:05 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:28:05.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:28:05 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "features"} v 0)
Nov 23 21:28:05 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4272356961' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 23 21:28:05 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Nov 23 21:28:05 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1053523333' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 23 21:28:05 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 23 21:28:05 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2899678984' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 23 21:28:05 compute-1 ceph-mon[80135]: from='client.18120 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:05 compute-1 ceph-mon[80135]: from='client.27031 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:05 compute-1 ceph-mon[80135]: from='client.28154 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:05 compute-1 ceph-mon[80135]: from='client.18153 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:05 compute-1 ceph-mon[80135]: from='client.27046 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:05 compute-1 ceph-mon[80135]: from='client.28196 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:05 compute-1 ceph-mon[80135]: pgmap v1407: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:28:05 compute-1 ceph-mon[80135]: from='client.27058 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:05 compute-1 ceph-mon[80135]: from='client.18183 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:05 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/967768942' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 23 21:28:05 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2933207601' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 23 21:28:05 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1787544456' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 23 21:28:05 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1673830274' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 23 21:28:05 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/654075312' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 23 21:28:05 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/4272356961' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 23 21:28:05 compute-1 ceph-mon[80135]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 23 21:28:05 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1053523333' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 23 21:28:05 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/694766729' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 23 21:28:05 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/4190709229' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 23 21:28:05 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2899678984' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 23 21:28:05 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 23 21:28:05 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3857694526' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 23 21:28:05 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Nov 23 21:28:05 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/246114743' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 23 21:28:06 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 23 21:28:06 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/893255180' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 23 21:28:06 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:28:06 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:28:06 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:28:06.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:28:06 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Nov 23 21:28:06 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1417216559' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 23 21:28:06 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Nov 23 21:28:06 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1263003622' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 23 21:28:06 compute-1 ceph-mon[80135]: from='client.28211 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:06 compute-1 ceph-mon[80135]: from='client.18201 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:06 compute-1 ceph-mon[80135]: from='client.27070 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:06 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3857694526' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 23 21:28:06 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/246114743' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 23 21:28:06 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2103171494' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 23 21:28:06 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/558271612' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 23 21:28:06 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3117868061' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 23 21:28:06 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/893255180' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 23 21:28:06 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2405757265' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 23 21:28:06 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3684320480' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 23 21:28:06 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1417216559' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 23 21:28:06 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1263003622' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 23 21:28:07 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Nov 23 21:28:07 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2773208757' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 23 21:28:07 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:28:07 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:28:07 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:28:07.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:28:07 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Nov 23 21:28:07 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1133566738' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 23 21:28:07 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:28:07 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 23 21:28:07 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1004930768' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 23 21:28:07 compute-1 ceph-mon[80135]: from='client.27100 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:07 compute-1 ceph-mon[80135]: from='client.28274 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:07 compute-1 ceph-mon[80135]: from='client.27112 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:07 compute-1 ceph-mon[80135]: from='client.18264 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:07 compute-1 ceph-mon[80135]: pgmap v1408: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:28:07 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3225411219' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 23 21:28:07 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/212132359' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 23 21:28:07 compute-1 ceph-mon[80135]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 23 21:28:07 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3078253173' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 23 21:28:07 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1731456197' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 23 21:28:07 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2773208757' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 23 21:28:07 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1133566738' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 23 21:28:07 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1538479727' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 23 21:28:07 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/425593023' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 23 21:28:07 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2397719647' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 23 21:28:07 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3816304753' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 23 21:28:07 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1004930768' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 23 21:28:08 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 23 21:28:08 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1316302909' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 23 21:28:08 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:28:08 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:28:08 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:28:08.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:43.612127+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988246 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:44.612264+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:45.612406+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:46.612558+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:47.612724+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:48.612951+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988246 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:49.613129+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:50.613570+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a9f9000 session 0x55805c452000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:51.613711+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:52.613843+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:53.613944+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988246 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:54.614072+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:55.614194+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:56.614314+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:57.614687+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:58.614860+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988246 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:55:59.615088+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:00.615287+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:01.615471+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805b24c400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 58.244842529s of 58.250808716s, submitted: 2
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:02.615656+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:03.615792+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988378 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:04.615910+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:05.616065+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:06.616222+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:07.616378+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:08.616552+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989890 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:09.616698+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:10.616938+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:11.617208+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:12.617434+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:13.617609+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989299 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:14.617759+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:15.617928+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:16.618122+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:17.618256+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.113847733s of 16.138629913s, submitted: 3
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:18.618389+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989167 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:19.618582+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:20.618950+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805b24c400 session 0x55805c633860
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:21.619131+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:22.619296+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:23.619481+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989167 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:24.619608+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:25.619736+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:26.619904+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:27.620012+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:28.620157+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989167 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:29.620350+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:30.620496+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:31.620699+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805c7ff400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.863085747s of 13.866735458s, submitted: 1
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:32.620830+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:33.621047+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989299 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:34.621197+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:35.621337+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:36.621527+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:37.621650+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:38.621770+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990811 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 1777664 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:39.621924+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 1777664 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:40.622157+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 1777664 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:41.622327+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 1777664 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:42.622451+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 1777664 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:43.622578+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990220 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 1777664 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:44.622999+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 1777664 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:45.623665+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 1777664 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:46.623784+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.973569870s of 14.986274719s, submitted: 3
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805c7ff400 session 0x55805cc7e5a0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:47.623949+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:48.624061+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805ac0a000 session 0x55805a7e6b40
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a935800 session 0x55805d8adc20
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990088 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:49.624183+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:50.624357+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:51.624724+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:52.625058+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:53.625176+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990088 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:54.625340+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:55.625547+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:56.625787+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:57.625954+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a935800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.993670464s of 10.996788025s, submitted: 1
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:58.626105+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 1761280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990220 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:56:59.626289+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 1761280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:00.626418+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 1761280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:01.626546+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 1761280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:02.626692+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 1761280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:03.626889+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 1761280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805ac0a000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991864 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:04.627021+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 1761280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:05.627133+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 1761280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805b24c400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:06.627344+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 1761280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:07.627490+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 1753088 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:08.627737+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 1753088 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.335764885s of 11.355053902s, submitted: 4
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992785 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:09.627966+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 1753088 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:10.628161+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 1753088 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:11.628304+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 1753088 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:12.628425+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:13.628542+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992653 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:14.628670+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:15.628803+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:16.628945+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:17.629179+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:18.629318+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:19.629480+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992521 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:20.629599+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:21.629819+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:22.629956+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:23.630123+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:24.630281+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992521 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:25.630579+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:26.630762+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:27.630981+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:28.631125+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:29.631344+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992521 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:30.631474+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:31.631643+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:32.631860+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:33.632030+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:34.632199+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992521 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:35.632410+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:36.632622+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:37.632857+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805d6b7400 session 0x55805c452b40
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a7f0000 session 0x55805d25f0e0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:38.633085+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:39.633255+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992521 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:40.633426+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:41.633621+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:42.633753+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:43.633929+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:44.634055+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992521 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:45.634197+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:46.634324+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:47.634507+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:48.634632+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:49.635197+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992521 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:50.635367+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:51.636027+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:52.636233+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d651c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 43.579681396s of 43.622188568s, submitted: 3
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:53.636917+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:54.637043+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992653 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:55.637317+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:56.637535+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:57.637656+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:58.637796+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d650800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:57:59.637944+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995677 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:00.638069+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:01.638189+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:02.638312+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:03.638458+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.178874969s of 11.839152336s, submitted: 5
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:04.638634+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994363 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:05.638849+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:06.639029+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:07.639199+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:08.639317+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:09.639467+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994363 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:10.639684+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:11.639891+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:12.640080+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:13.640283+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:14.640481+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994363 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:15.640613+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:16.640778+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:17.641022+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:18.641156+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:19.641361+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994363 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:20.641543+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:21.641701+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:22.641954+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805ac0a000 session 0x55805cd7cb40
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a935800 session 0x55805d565c20
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:23.642120+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:24.642811+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994363 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:25.643091+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:26.643218+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:27.643362+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:28.643513+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:29.643887+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994363 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:30.644265+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:31.644413+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:32.644575+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:33.644720+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 29.315622330s of 29.319524765s, submitted: 1
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:34.644899+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994495 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805d64ac00 session 0x55805cd1c960
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d64ac00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:35.645253+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:36.645575+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:37.645701+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:38.645957+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:39.646236+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a7f0000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999031 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:40.646468+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:41.646631+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:42.646844+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:43.646970+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:44.647127+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997849 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:45.647277+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:46.647401+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:47.647523+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.826273918s of 13.858831406s, submitted: 6
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:48.647727+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:49.647956+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997717 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:50.648080+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:51.648235+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:52.648429+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:53.648655+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:54.648831+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997717 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:55.649257+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:56.649413+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805b24c400 session 0x55805d564f00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a9f9000 session 0x55805b7d72c0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:57.649814+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:58.649976+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:58:59.650169+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997717 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:00.650706+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:01.651061+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:02.651187+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:03.651309+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:04.651713+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997717 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:05.652081+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:06.652395+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:07.652519+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a935800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.386011124s of 20.388910294s, submitted: 1
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:08.652688+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:09.652837+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997849 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:10.652980+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:11.653128+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:12.653299+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:13.653538+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805ac0a000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:14.653683+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997849 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:15.653974+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:16.654189+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805d650800 session 0x55805a67a960
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805d651c00 session 0x55805a7e5860
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:17.654346+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:18.654583+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:19.654960+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997258 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.066018105s of 12.072667122s, submitted: 2
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:20.655091+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:21.655224+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:22.655376+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:23.655572+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:24.655697+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:25.655822+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:26.655920+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:27.656139+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 1703936 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:28.656266+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 1703936 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:29.656457+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996667 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 1703936 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:30.656591+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 1703936 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:31.656714+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 1703936 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:32.656907+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 1703936 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:33.657061+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 1703936 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:34.657197+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996667 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:35.657355+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:36.657480+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:37.657603+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:38.657725+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:39.657952+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996667 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:40.658063+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:41.658157+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:42.658297+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.506420135s of 22.516693115s, submitted: 3
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:43.658426+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:44.658619+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:45.658821+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:46.659077+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:47.659262+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:48.659472+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:49.659680+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:50.659854+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:51.660045+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:52.660299+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:53.660482+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:54.660667+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:55.660884+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:56.661097+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:57.661311+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:58.661503+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T20:59:59.661724+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:00.661950+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:01.662280+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:02.662541+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 1679360 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:03.662829+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 1679360 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:04.663096+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 9173 writes, 35K keys, 9173 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 9173 writes, 2093 syncs, 4.38 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 845 writes, 1350 keys, 845 commit groups, 1.0 writes per commit group, ingest: 0.45 MB, 0.00 MB/s
                                           Interval WAL: 845 writes, 399 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5580590769b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5580590769b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5580590769b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:05.663243+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:06.663410+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:07.663556+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:08.663703+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:09.663932+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:10.664098+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:11.664251+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:12.664405+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:13.664551+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:14.664690+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:15.664829+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:16.664921+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:17.665096+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:18.665290+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a7f0000 session 0x55805b434b40
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a9f9c00 session 0x55805d3f05a0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:19.665500+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:20.665688+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread fragmentation_score=0.000031 took=0.000034s
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:21.665893+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:22.666116+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:23.666288+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:24.666455+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:25.666682+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:26.666850+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:27.667059+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:28.667190+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:29.667411+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a7f0000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 47.324840546s of 47.329608917s, submitted: 1
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996667 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:30.667547+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:31.667692+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:32.667809+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:33.667984+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:34.668145+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996667 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:35.668317+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 1613824 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:36.668544+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 1613824 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:37.668749+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 1613824 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:38.669002+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 1613824 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:39.669337+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998179 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 1613824 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:40.669559+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 1613824 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:41.669753+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 1613824 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:42.670000+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:43.670178+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.818011284s of 13.826013565s, submitted: 2
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:44.670342+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998047 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:45.670481+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:46.670639+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:47.670809+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:48.670983+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:49.671157+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998047 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:50.671272+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:51.671395+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:52.671514+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:53.671684+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:54.671823+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998047 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:55.672046+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:56.672232+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:57.672398+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:58.672566+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:00:59.672810+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998047 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:00.672928+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:01.673054+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:02.673202+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:03.673353+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805ac0a000 session 0x55805cc80b40
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a935800 session 0x55805cc7c1e0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:04.673483+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998047 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:05.673636+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:06.673755+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:07.673953+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:08.674168+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:09.674424+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998047 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:10.674549+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:11.674697+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:12.674850+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:13.675097+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:14.675257+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998047 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:15.675427+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:16.675578+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805b24c400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 33.236343384s of 33.268554688s, submitted: 1
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:17.675709+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:18.675849+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:19.676031+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998179 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:20.676178+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d650800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:21.676318+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:22.676424+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:23.676585+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d651c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 1589248 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:24.676730+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001203 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 1589248 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:25.676932+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:26.677052+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:27.677302+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.904835701s of 10.988478661s, submitted: 3
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:28.677447+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:29.677646+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000480 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:30.677751+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:31.677928+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:32.678163+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:33.678323+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:34.678539+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a9f9000 session 0x55805d862f00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a7f0000 session 0x55805cc7cb40
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000480 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:35.678705+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:36.678948+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:37.679149+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:38.679289+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:39.679503+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000480 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:40.679672+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:41.679834+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:42.680002+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:43.680150+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:44.680313+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000480 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:45.680486+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a7f0000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.852489471s of 18.009616852s, submitted: 2
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:46.680623+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 1572864 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:47.680785+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 1572864 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:48.680936+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 1572864 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:49.681109+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000612 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:50.681277+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:51.681400+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:52.681567+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:53.681693+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:54.681937+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999430 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:55.682144+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:56.682388+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:57.682583+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:58.682733+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:01:59.682975+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.888288498s of 13.898387909s, submitted: 3
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999298 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:00.683139+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:01.683579+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:02.683704+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:03.683849+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:04.684219+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:05.684370+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999298 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:06.684571+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:07.685062+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:08.685894+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:09.686099+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:10.686253+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999298 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:11.686433+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805d651c00 session 0x55805d564f00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805b24c400 session 0x55805b7d74a0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:12.686652+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:13.686838+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:14.687091+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.887771606s of 14.891558647s, submitted: 1
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [0,0,2])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:15.687323+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999370 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 1425408 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:16.687475+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 65536 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:17.687648+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:18.688132+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:19.688468+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:20.689204+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999298 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:21.689899+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:22.690070+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a935800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:23.690344+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:24.690587+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:25.690796+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999430 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:26.690954+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:27.691136+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:28.691357+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.794444084s of 14.003565788s, submitted: 364
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:29.691690+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:30.691927+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000942 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:31.692104+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:32.692326+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:33.692459+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:34.692617+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a7f0000 session 0x55805c455c20
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:35.692820+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000942 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:36.692967+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:37.693110+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:38.693254+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:39.693594+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:40.694018+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000810 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:41.694844+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:42.695227+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:43.696085+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:44.696217+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:45.696452+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.896261215s of 16.902111053s, submitted: 2
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000942 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:46.696811+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:47.697445+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:48.697772+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:49.698015+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:50.698250+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002454 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:51.698484+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:52.698834+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:53.699006+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:54.699520+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:55.699791+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001863 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:56.699944+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:57.700153+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:58.700368+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:02:59.700639+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:00.700816+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001863 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.281729698s of 15.294480324s, submitted: 3
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:01.701077+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:02.701254+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:03.701451+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:04.701617+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:05.701743+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001731 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:06.701927+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:07.702104+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:08.702296+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:09.702564+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:10.702754+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001731 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:11.702899+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:12.703048+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:13.703201+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:14.703328+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:15.703453+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001731 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:16.703590+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:17.703816+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:18.704006+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:19.704218+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:20.704339+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001731 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805d650800 session 0x55805c455a40
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805d6b7400 session 0x55805d4ae1e0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:21.704480+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:22.704611+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:23.704712+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:24.704895+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:25.705010+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001731 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:26.705142+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:27.705267+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:28.705393+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a9f9000 session 0x55805c7f0000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a935800 session 0x55805c7f10e0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:29.705530+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:30.705671+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001731 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:31.705822+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a7f0000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.912832260s of 30.916051865s, submitted: 1
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:32.705926+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:33.706119+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:34.706202+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:35.706366+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001863 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:36.706521+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:37.706667+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805b24c400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:38.706759+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:39.706946+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d650800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:40.707091+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001995 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:41.707250+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:42.707412+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:43.707590+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:44.707755+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:45.707970+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001995 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:46.708153+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d651c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.676693916s of 14.759789467s, submitted: 2
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:47.708316+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:48.708512+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:49.708714+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:50.708890+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a9f9c00 session 0x55805cc7eb40
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004296 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:51.709053+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:52.709185+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:53.709931+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:54.710126+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:55.710266+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004296 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:56.710413+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:57.710561+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.021399498s of 11.037414551s, submitted: 4
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:58.710713+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:03:59.710925+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:00.711060+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004164 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:01.711210+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:02.711364+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:03.711495+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:04.711727+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:05.711917+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007320 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:06.712081+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:07.712211+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805ac0a000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.964635849s of 10.006252289s, submitted: 4
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:08.712323+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:09.712506+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:10.712624+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007650 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:11.712760+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:12.712914+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:13.713065+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:14.713201+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:15.713325+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007518 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:16.713433+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:17.713557+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:18.713736+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:19.713968+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:20.714086+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007518 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:21.714249+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:22.714384+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:23.714533+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:24.714675+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:25.714844+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007518 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:26.715011+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:27.715144+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:28.715285+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:29.715481+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:30.715621+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007518 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:31.715736+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:32.715898+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:33.716032+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:34.716154+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:35.716364+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007518 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:36.716543+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:37.716682+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:38.716850+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:39.717158+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:40.717312+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007518 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:41.717457+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:42.717697+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805b24c400 session 0x55805cc80780
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a7f0000 session 0x55805b7f4d20
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:43.717991+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:44.718137+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:45.718333+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007518 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:46.718518+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:47.718705+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:48.718852+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:49.719096+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:50.719237+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007518 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:51.719434+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:52.719618+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:53.719807+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a7f0000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 46.184024811s of 46.203655243s, submitted: 4
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:54.720111+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:55.720261+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007650 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:56.720424+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:57.720648+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:58.720842+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:04:59.721094+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a935800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:00.721260+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007650 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:01.721402+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:02.721519+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805d651c00 session 0x55805d5652c0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805d650800 session 0x55805c7ef0e0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:03.721723+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:04.721915+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:05.722134+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.059599876s of 12.068504333s, submitted: 2
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006468 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:06.722429+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:07.722566+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:08.722746+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:09.722974+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:10.723149+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006468 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:11.723296+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:12.723442+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:13.723559+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:14.723751+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:15.723932+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 142 handle_osd_map epochs [143,144], i have 142, src has [1,144]
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.084339142s of 10.095458031s, submitted: 3
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1016839 data_alloc: 218103808 data_used: 266240
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88047616 unmapped: 2088960 heap: 90136576 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:16.724064+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _renew_subs
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 145 ms_handle_reset con 0x55805a9f9c00 session 0x55805d4aeb40
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 2031616 heap: 90136576 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:17.724216+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805b24c400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fc5dc000/0x0/0x4ffc00000, data 0x170fbd/0x22e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88227840 unmapped: 18694144 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:18.724362+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _renew_subs
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 146 ms_handle_reset con 0x55805b24c400 session 0x55805b69cf00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88260608 unmapped: 18661376 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:19.724546+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88260608 unmapped: 18661376 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:20.724670+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134726 data_alloc: 218103808 data_used: 274432
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88260608 unmapped: 18661376 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:21.724854+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fb5d8000/0x0/0x4ffc00000, data 0x11730f8/0x1233000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88260608 unmapped: 18661376 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:22.725149+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88260608 unmapped: 18661376 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:23.725304+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88260608 unmapped: 18661376 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fb5d8000/0x0/0x4ffc00000, data 0x11730f8/0x1233000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:24.725416+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88260608 unmapped: 18661376 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:25.725534+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d5000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137156 data_alloc: 218103808 data_used: 274432
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88260608 unmapped: 18661376 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:26.725742+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88276992 unmapped: 18644992 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:27.725909+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88276992 unmapped: 18644992 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:28.726034+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d5000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88276992 unmapped: 18644992 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:29.726219+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d5000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.137514114s of 14.367403030s, submitted: 61
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 18636800 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:30.726373+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137024 data_alloc: 218103808 data_used: 274432
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 18636800 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:31.726509+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d5000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 18636800 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:32.726641+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 18636800 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:33.726828+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d5000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 18636800 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:34.727061+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 18636800 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 147 ms_handle_reset con 0x55805a9f9000 session 0x55805d4afe00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:35.727263+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137024 data_alloc: 218103808 data_used: 274432
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 18636800 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:36.727500+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 18636800 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:37.727693+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 18636800 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:38.727787+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 18628608 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d5000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:39.727975+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 18628608 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:40.728107+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137024 data_alloc: 218103808 data_used: 274432
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 18628608 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:41.728339+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 18628608 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:42.728547+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 18628608 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:43.728660+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d5000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 18628608 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:44.728801+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 18628608 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:45.728937+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.764553070s of 15.769596100s, submitted: 1
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136316 data_alloc: 218103808 data_used: 274432
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 18628608 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:46.729101+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 18628608 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:47.729256+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d6000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88301568 unmapped: 18620416 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:48.729394+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:49.729569+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:50.729708+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d6000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139340 data_alloc: 218103808 data_used: 274432
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:51.729936+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:52.730310+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:53.730779+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:54.731144+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:55.731469+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138749 data_alloc: 218103808 data_used: 274432
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:56.731714+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d6000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:57.731883+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:58.732054+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:05:59.732275+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:00.732404+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d6000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138749 data_alloc: 218103808 data_used: 274432
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:01.733563+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:02.734609+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805b24c400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 147 ms_handle_reset con 0x55805b24c400 session 0x55805c4534a0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d650800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 147 ms_handle_reset con 0x55805d650800 session 0x55805d3f14a0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d6000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d651c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 147 ms_handle_reset con 0x55805d651c00 session 0x55805d4721e0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:03.735545+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.763429642s of 17.799776077s, submitted: 4
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d740000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 147 ms_handle_reset con 0x55805d740000 session 0x55805b7d6b40
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d64a800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 147 ms_handle_reset con 0x55805d64a800 session 0x55805d3f05a0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:04.736214+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88326144 unmapped: 18595840 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805b24c400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 147 ms_handle_reset con 0x55805b24c400 session 0x55805d8be960
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d650800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:05.736571+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88326144 unmapped: 18595840 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138769 data_alloc: 218103808 data_used: 278528
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:06.737273+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88326144 unmapped: 18595840 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _renew_subs
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 147 handle_osd_map epochs [148,148], i have 148, src has [1,148]
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:07.737760+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88334336 unmapped: 18587648 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb5d1000/0x0/0x4ffc00000, data 0x11771be/0x123a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:08.738110+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88334336 unmapped: 18587648 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:09.738289+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _renew_subs
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 18579456 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 149 ms_handle_reset con 0x55805d650800 session 0x55805d92ad20
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d651c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 149 ms_handle_reset con 0x55805d651c00 session 0x55805d8bfa40
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d740000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 149 ms_handle_reset con 0x55805d740000 session 0x55805a6734a0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e2f1c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 149 ms_handle_reset con 0x55805e2f1c00 session 0x55805cfb3a40
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805b24c400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 149 ms_handle_reset con 0x55805b24c400 session 0x55805a7e7680
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:10.738437+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 21446656 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa6d5000/0x0/0x4ffc00000, data 0x207136b/0x2136000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1268233 data_alloc: 218103808 data_used: 278528
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:11.738581+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa6d5000/0x0/0x4ffc00000, data 0x207136b/0x2136000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 21446656 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:12.738723+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 21446656 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:13.738897+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 21446656 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d650800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 149 ms_handle_reset con 0x55805d650800 session 0x55805cc7cd20
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d651c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.449963570s of 10.389714241s, submitted: 77
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d740000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:14.739067+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 21430272 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:15.739185+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 90390528 unmapped: 20733952 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa6d5000/0x0/0x4ffc00000, data 0x207136b/0x2136000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1365949 data_alloc: 234881024 data_used: 14716928
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:16.739346+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:17.739468+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:18.739596+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:19.739840+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 149 handle_osd_map epochs [149,150], i have 149, src has [1,150]
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa6d5000/0x0/0x4ffc00000, data 0x207136b/0x2136000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:20.740013+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:21.740300+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367067 data_alloc: 234881024 data_used: 14716928
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:22.740550+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:23.740740+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:24.741511+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa6d2000/0x0/0x4ffc00000, data 0x207333d/0x2139000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:25.742016+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:26.742201+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367675 data_alloc: 234881024 data_used: 14733312
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.585947990s of 12.599705696s, submitted: 21
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:27.742316+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114442240 unmapped: 876544 heap: 115318784 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:28.742460+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8928000/0x0/0x4ffc00000, data 0x2c7e33d/0x2d44000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [0,0,0,0,0,8])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 1097728 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f88fe000/0x0/0x4ffc00000, data 0x2ca733d/0x2d6d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:29.742625+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 3555328 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:30.742755+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 3555328 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:31.743767+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1475763 data_alloc: 234881024 data_used: 16691200
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 3555328 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:32.744401+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 3555328 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:33.744680+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 3555328 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f88f3000/0x0/0x4ffc00000, data 0x2cb333d/0x2d79000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:34.744854+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 3522560 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f88f0000/0x0/0x4ffc00000, data 0x2cb633d/0x2d7c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:35.745017+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 3522560 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:36.745233+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1475931 data_alloc: 234881024 data_used: 16703488
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 3522560 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:37.745429+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113909760 unmapped: 3506176 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f88f0000/0x0/0x4ffc00000, data 0x2cb633d/0x2d7c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:38.745924+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113917952 unmapped: 3497984 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:39.746221+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113917952 unmapped: 3497984 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:40.746539+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113917952 unmapped: 3497984 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.262884140s of 14.072373390s, submitted: 141
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:41.746804+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476763 data_alloc: 234881024 data_used: 16764928
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113934336 unmapped: 3481600 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f88ef000/0x0/0x4ffc00000, data 0x2cb733d/0x2d7d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:42.746980+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113934336 unmapped: 3481600 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:43.747210+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113942528 unmapped: 3473408 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:44.747341+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f88ef000/0x0/0x4ffc00000, data 0x2cb733d/0x2d7d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113942528 unmapped: 3473408 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:45.747540+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113942528 unmapped: 3473408 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:46.747676+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476763 data_alloc: 234881024 data_used: 16764928
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113942528 unmapped: 3473408 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f88ef000/0x0/0x4ffc00000, data 0x2cb733d/0x2d7d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:47.747852+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 3440640 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:48.748150+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 3440640 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:49.748307+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 3440640 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:50.748464+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805daf9c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805daf9c00 session 0x55805cc7c1e0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805daf9800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 3440640 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805daf9800 session 0x55805cc80000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:51.748638+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476259 data_alloc: 234881024 data_used: 16764928
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f88ef000/0x0/0x4ffc00000, data 0x2cb733d/0x2d7d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 3440640 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805daf9400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.868956566s of 10.927964211s, submitted: 6
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805daf9400 session 0x55805c7f03c0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805b24c400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805b24c400 session 0x55805cd7c1e0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d650800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d650800 session 0x55805d92a5a0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805daf9800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805daf9800 session 0x55805d92bc20
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805daf9c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805daf9c00 session 0x55805a7e7e00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:52.748818+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 14835712 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7c62000/0x0/0x4ffc00000, data 0x394339f/0x3a0a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:53.748927+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8c0c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0c00 session 0x55805d4af4a0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 14835712 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:54.749077+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 14835712 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805b24c400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:55.749232+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805b24c400 session 0x55805cd1d4a0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 14835712 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:56.749364+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1570244 data_alloc: 234881024 data_used: 16769024
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 14835712 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d650800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d650800 session 0x55805cc80f00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8c0c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0c00 session 0x55805a7e5860
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:57.749536+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805daf9800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805daf9c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 15720448 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:58.749621+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7c3c000/0x0/0x4ffc00000, data 0x39673d2/0x3a30000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114491392 unmapped: 14475264 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:06:59.749841+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121708544 unmapped: 7258112 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:00.749991+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121708544 unmapped: 7258112 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:01.750154+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1665355 data_alloc: 234881024 data_used: 26218496
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121708544 unmapped: 7258112 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:02.750300+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121708544 unmapped: 7258112 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:03.750431+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121708544 unmapped: 7258112 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:04.750597+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7c3c000/0x0/0x4ffc00000, data 0x39673d2/0x3a30000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121708544 unmapped: 7258112 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:05.750776+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121708544 unmapped: 7258112 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:06.750915+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1665355 data_alloc: 234881024 data_used: 26218496
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121708544 unmapped: 7258112 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:07.751047+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121708544 unmapped: 7258112 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:08.751184+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121716736 unmapped: 7249920 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.360017776s of 17.521881104s, submitted: 38
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:09.751354+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 127156224 unmapped: 6299648 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7c3c000/0x0/0x4ffc00000, data 0x39673d2/0x3a30000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f6c19000/0x0/0x4ffc00000, data 0x498a3d2/0x4a53000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:10.751497+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 7643136 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:11.751608+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1796399 data_alloc: 234881024 data_used: 26443776
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 8118272 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f6bf8000/0x0/0x4ffc00000, data 0x49ab3d2/0x4a74000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:12.751754+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 8118272 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:13.751960+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 8118272 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:14.752083+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 8118272 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:15.752236+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 8118272 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:16.752354+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1796687 data_alloc: 234881024 data_used: 26435584
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f6bf7000/0x0/0x4ffc00000, data 0x49ab3d2/0x4a74000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 8118272 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:17.752482+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 8118272 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:18.752672+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 8118272 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805daf9800 session 0x55805a6730e0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805daf9c00 session 0x55805a7e6b40
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805daf9c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.716887474s of 10.034674644s, submitted: 127
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:19.752933+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805daf9c00 session 0x55805d8be3c0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 17383424 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:20.753066+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 17383424 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f851c000/0x0/0x4ffc00000, data 0x2cb733d/0x2d7d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:21.753216+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1492452 data_alloc: 234881024 data_used: 12238848
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 17383424 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:22.753342+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 17383424 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:23.753523+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f851b000/0x0/0x4ffc00000, data 0x2cb833d/0x2d7e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 17383424 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:24.753682+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 17383424 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d651c00 session 0x55805d25e000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d740000 session 0x55805cf8d4a0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:25.753815+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805b24c400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805b24c400 session 0x55805d8be960
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106733568 unmapped: 26722304 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f88ee000/0x0/0x4ffc00000, data 0x2cb833d/0x2d7e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:26.753989+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180934 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106733568 unmapped: 26722304 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805ac0a000 session 0x55805c454b40
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805cfb2f00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:27.754123+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106733568 unmapped: 26722304 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:28.754255+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106733568 unmapped: 26722304 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa427000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:29.754408+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:30.754543+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805b69d2c0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:31.754680+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180934 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:32.754828+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:33.754969+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa427000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:34.755144+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:35.755261+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:36.755377+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180934 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa427000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:37.755532+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805ac0a000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.443714142s of 18.708480835s, submitted: 87
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa427000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:38.755692+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:39.755856+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:40.756091+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa427000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:41.756263+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181066 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:42.756406+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa427000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:43.756537+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805b24c400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:44.756722+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:45.756848+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e2f0800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 33972224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805cd7c3c0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e2f0c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0c00 session 0x55805cd7cf00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8e1c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8e1c00 session 0x55805b7f5c20
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805b7f5a40
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805b7f4780
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa427000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:46.757007+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207078 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 33972224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:47.757152+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 33972224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:48.757324+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 33972224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa062000/0x0/0x4ffc00000, data 0x15452db/0x160a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:49.757518+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e2f0800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805b7f43c0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 33972224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e2f0c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0c00 session 0x55805b2a5a40
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:50.757677+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8e1400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 33972224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8e1400 session 0x55805b2a4b40
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.920416832s of 12.955449104s, submitted: 4
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805b2a4960
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:51.757819+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211460 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e2f0800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 34390016 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c2d000/0x0/0x4ffc00000, data 0x15692eb/0x162f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:52.757954+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 34390016 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:53.758065+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 34390016 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:54.758186+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 34390016 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c2d000/0x0/0x4ffc00000, data 0x15692eb/0x162f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:55.758286+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c2d000/0x0/0x4ffc00000, data 0x15692eb/0x162f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 34390016 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:56.758437+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c2d000/0x0/0x4ffc00000, data 0x15692eb/0x162f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238384 data_alloc: 218103808 data_used: 4263936
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 34373632 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:57.758549+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 34373632 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:58.758641+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 34373632 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:07:59.758770+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 34373632 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:00.758975+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 34373632 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:01.759134+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238384 data_alloc: 218103808 data_used: 4263936
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 34373632 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:02.759295+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c2d000/0x0/0x4ffc00000, data 0x15692eb/0x162f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 34373632 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:03.759453+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.594253540s of 12.610000610s, submitted: 4
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 109379584 unmapped: 31424512 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9a24000/0x0/0x4ffc00000, data 0x17722eb/0x1838000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:04.759634+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9a07000/0x0/0x4ffc00000, data 0x178f2eb/0x1855000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 109715456 unmapped: 31088640 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:05.759792+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 109715456 unmapped: 31088640 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:06.759939+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264450 data_alloc: 218103808 data_used: 4370432
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 109715456 unmapped: 31088640 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99ff000/0x0/0x4ffc00000, data 0x17972eb/0x185d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:07.760082+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 109715456 unmapped: 31088640 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:08.760681+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 109715456 unmapped: 31088640 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:09.760942+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 109715456 unmapped: 31088640 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:10.761072+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108363776 unmapped: 32440320 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:11.761238+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263650 data_alloc: 218103808 data_used: 4370432
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108363776 unmapped: 32440320 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:12.761381+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99fd000/0x0/0x4ffc00000, data 0x17992eb/0x185f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108363776 unmapped: 32440320 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:13.761519+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108363776 unmapped: 32440320 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:14.761635+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108363776 unmapped: 32440320 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:15.761812+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108363776 unmapped: 32440320 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.573128700s of 12.650348663s, submitted: 29
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:16.761972+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263874 data_alloc: 218103808 data_used: 4370432
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:17.762125+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99fc000/0x0/0x4ffc00000, data 0x179a2eb/0x1860000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:18.762275+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:19.762447+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:20.762617+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:21.762842+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263874 data_alloc: 218103808 data_used: 4370432
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99fc000/0x0/0x4ffc00000, data 0x179a2eb/0x1860000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:22.763044+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:23.763157+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99fc000/0x0/0x4ffc00000, data 0x179a2eb/0x1860000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:24.763347+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:25.763474+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:26.763601+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263874 data_alloc: 218103808 data_used: 4370432
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:27.763733+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99fc000/0x0/0x4ffc00000, data 0x179a2eb/0x1860000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99fc000/0x0/0x4ffc00000, data 0x179a2eb/0x1860000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:28.763931+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99fc000/0x0/0x4ffc00000, data 0x179a2eb/0x1860000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805cfb3680
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805cd74000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e2f0c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:29.764187+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.208628654s of 13.212368965s, submitted: 1
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0c00 session 0x55805b7f43c0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:30.764425+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:31.764632+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186319 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:32.764826+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:33.764986+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:34.765122+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:35.765299+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:36.765511+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186319 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:37.765659+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:38.765810+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:39.766018+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:40.766151+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:41.766275+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186319 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:42.766404+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:43.766531+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:44.766650+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:45.766970+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:46.767129+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186319 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:47.767323+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:48.767491+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:49.767660+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8e1000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8e1000 session 0x55805cfb2f00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805d8be3c0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805a7e6b40
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e2f0800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805d25e000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e2f0c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.695281982s of 20.782047272s, submitted: 14
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:50.767834+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0c00 session 0x55805d25e5a0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dad9400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad9400 session 0x55805a673860
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805a672960
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d8be780
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e2f0800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805cfb34a0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 35627008 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:51.767980+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251055 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 35627008 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:52.768233+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 35627008 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9955000/0x0/0x4ffc00000, data 0x184034d/0x1907000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:53.768387+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 35627008 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:54.768558+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 35627008 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e2f0c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0c00 session 0x55805a7e74a0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:55.768711+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9955000/0x0/0x4ffc00000, data 0x184034d/0x1907000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dad8400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad8400 session 0x55805b435c20
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 35627008 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:56.768891+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9955000/0x0/0x4ffc00000, data 0x184034d/0x1907000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805d56d4a0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251055 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 34578432 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d92b0e0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dad8400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:57.768998+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad8400 session 0x55805d92a5a0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e2f0800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805b69d2c0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 34537472 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:58.769135+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01a000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 34537472 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01a000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:08:59.769293+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 34537472 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:00.769418+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e2f0c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0c00 session 0x55805b69cb40
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805d567c20
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d567680
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dad8400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad8400 session 0x55805d566780
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 34570240 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e2f0800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.721254349s of 10.977932930s, submitted: 86
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:01.769537+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805d5663c0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dad9c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad9c00 session 0x55805d25ef00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805d25fe00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d56d4a0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dad8400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad8400 session 0x55805d56de00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1241073 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 33996800 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:02.769700+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 33996800 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:03.769844+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 33996800 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:04.770031+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e2f0800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805d92a5a0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9af4000/0x0/0x4ffc00000, data 0x16a22eb/0x1768000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 33996800 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dad9800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad9800 session 0x55805a7e6b40
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:05.770149+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805a7e74a0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805cfb3680
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106815488 unmapped: 33988608 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:06.770273+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dad8400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e2f0800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242887 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106815488 unmapped: 33988608 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:07.770401+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9af3000/0x0/0x4ffc00000, data 0x16a22fb/0x1769000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 32948224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:08.770560+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 32948224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:09.770706+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 32948224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:10.770841+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 32948224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:11.770958+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1279063 data_alloc: 218103808 data_used: 5537792
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 32948224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9af3000/0x0/0x4ffc00000, data 0x16a22fb/0x1769000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:12.771098+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 32948224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:13.771226+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9af3000/0x0/0x4ffc00000, data 0x16a22fb/0x1769000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 32948224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:14.771365+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9af3000/0x0/0x4ffc00000, data 0x16a22fb/0x1769000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 32948224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:15.771477+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9af3000/0x0/0x4ffc00000, data 0x16a22fb/0x1769000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107864064 unmapped: 32940032 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:16.771592+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1279063 data_alloc: 218103808 data_used: 5537792
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107864064 unmapped: 32940032 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:17.771728+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.606376648s of 16.673978806s, submitted: 14
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 28213248 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:18.771890+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113213440 unmapped: 27590656 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:19.772043+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 27705344 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9099000/0x0/0x4ffc00000, data 0x20f42fb/0x21bb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:20.772140+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 27705344 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:21.772299+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1368463 data_alloc: 218103808 data_used: 6819840
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 27705344 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:22.772416+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113106944 unmapped: 27697152 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:23.772528+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9099000/0x0/0x4ffc00000, data 0x20f42fb/0x21bb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113106944 unmapped: 27697152 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:24.772800+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113106944 unmapped: 27697152 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:25.772950+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9099000/0x0/0x4ffc00000, data 0x20f42fb/0x21bb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 28254208 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:26.773131+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1362511 data_alloc: 218103808 data_used: 6823936
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 28254208 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:27.773279+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 28254208 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:28.773411+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 28254208 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:29.773552+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 28254208 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:30.773683+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 28246016 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:31.773800+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.666546822s of 14.050541878s, submitted: 114
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f909e000/0x0/0x4ffc00000, data 0x20f72fb/0x21be000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1362735 data_alloc: 218103808 data_used: 6823936
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 28246016 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:32.773854+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 28237824 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:33.773977+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 28237824 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dad8800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:34.774080+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad8800 session 0x55805b434780
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacac00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805d92ab40
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacb000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacb000 session 0x55805cd745a0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805cd74f00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d8bfe00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 27705344 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:35.774224+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8cc1000/0x0/0x4ffc00000, data 0x24d335d/0x259b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113131520 unmapped: 27672576 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:36.774377+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1396522 data_alloc: 218103808 data_used: 6823936
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113131520 unmapped: 27672576 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:37.774547+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113131520 unmapped: 27672576 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:38.774715+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113131520 unmapped: 27672576 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:39.774960+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8cc1000/0x0/0x4ffc00000, data 0x24d335d/0x259b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 27639808 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:40.775082+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacac00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805d8be5a0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8cc1000/0x0/0x4ffc00000, data 0x24d335d/0x259b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dad8800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad8800 session 0x55805d8bf2c0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 27623424 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:41.775231+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacb400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacb400 session 0x55805d8bf860
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.050792694s of 10.149922371s, submitted: 31
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1398336 data_alloc: 218103808 data_used: 6823936
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805d8be000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 27623424 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:42.775367+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacac00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 27623424 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:43.775521+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113278976 unmapped: 27525120 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:44.775683+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115122176 unmapped: 25681920 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:45.775849+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8cc0000/0x0/0x4ffc00000, data 0x24d336d/0x259c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115154944 unmapped: 25649152 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:46.776100+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425240 data_alloc: 234881024 data_used: 10756096
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115163136 unmapped: 25640960 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:47.776222+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115163136 unmapped: 25640960 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:48.776399+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115171328 unmapped: 25632768 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:49.776582+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8cc0000/0x0/0x4ffc00000, data 0x24d336d/0x259c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115171328 unmapped: 25632768 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:50.776720+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8cc0000/0x0/0x4ffc00000, data 0x24d336d/0x259c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115195904 unmapped: 25608192 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:51.776945+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425048 data_alloc: 234881024 data_used: 10756096
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115195904 unmapped: 25608192 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:52.777062+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115195904 unmapped: 25608192 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:53.777208+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.226175308s of 12.244213104s, submitted: 6
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115195904 unmapped: 25608192 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:54.777333+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8b1f000/0x0/0x4ffc00000, data 0x267436d/0x273d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118652928 unmapped: 22151168 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:55.777477+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 22839296 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:56.777587+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494334 data_alloc: 234881024 data_used: 11829248
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 22831104 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:57.777724+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 22831104 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:58.777937+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 22831104 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:09:59.778101+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118038528 unmapped: 22765568 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:00.778331+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f84ce000/0x0/0x4ffc00000, data 0x2cbd36d/0x2d86000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118038528 unmapped: 22765568 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:01.778575+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 23 21:28:08 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/18556835' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1488982 data_alloc: 234881024 data_used: 11833344
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118038528 unmapped: 22765568 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:02.778794+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118046720 unmapped: 22757376 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:03.778954+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805b7f4f00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805c7f0960
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dad8800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118046720 unmapped: 22757376 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:04.779126+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.527749062s of 10.052300453s, submitted: 105
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 11K writes, 43K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 3004 syncs, 3.82 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2301 writes, 7858 keys, 2301 commit groups, 1.0 writes per commit group, ingest: 8.46 MB, 0.01 MB/s
                                           Interval WAL: 2301 writes, 911 syncs, 2.53 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f84d3000/0x0/0x4ffc00000, data 0x2cc036d/0x2d89000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [1])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad8800 session 0x55805d92af00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 24436736 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:05.779271+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:06.779440+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 24436736 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1371249 data_alloc: 218103808 data_used: 6823936
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:07.779611+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 24436736 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:08.779713+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 24436736 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad8400 session 0x55805d8be3c0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805b7d7860
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8cbd000/0x0/0x4ffc00000, data 0x20f92fb/0x21c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:09.779906+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805d4721e0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:10.780056+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:11.781071+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215158 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:12.781198+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:13.782142+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:14.782310+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:15.782473+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:16.782646+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215158 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:17.782833+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:18.782976+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:19.783444+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:20.783576+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:21.783715+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215158 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:22.783851+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:23.784112+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:24.784246+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:25.784397+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:26.784707+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215158 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:27.784861+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:28.785071+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:29.785342+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:30.785560+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:31.785736+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215158 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:32.785995+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:33.786259+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:34.786455+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:35.786631+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:36.786813+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:37.787060+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215158 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110673920 unmapped: 30130176 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:38.787233+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110673920 unmapped: 30130176 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 34.834026337s of 34.967418671s, submitted: 43
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:39.787399+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805b7d6780
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacac00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805d8ad4a0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dad8800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad8800 session 0x55805d8ac3c0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805d8ad680
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d8ad2c0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 29532160 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:40.787555+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 29532160 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:41.787726+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 29532160 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:42.787939+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233960 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 29532160 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9ea6000/0x0/0x4ffc00000, data 0x12f12db/0x13b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:43.788098+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 29532160 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:44.788243+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 29532160 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacac00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805d8ad860
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:45.789805+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e2f0800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacb800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 29507584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:46.789958+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 29507584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:47.790094+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244525 data_alloc: 218103808 data_used: 1630208
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9ea6000/0x0/0x4ffc00000, data 0x12f12db/0x13b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 29507584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:48.790263+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 29507584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:49.790436+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 29507584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:50.790594+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 29507584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:51.790768+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 29507584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:52.790984+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244525 data_alloc: 218103808 data_used: 1630208
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 29499392 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9ea6000/0x0/0x4ffc00000, data 0x12f12db/0x13b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:53.791183+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 29499392 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:54.791329+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9ea6000/0x0/0x4ffc00000, data 0x12f12db/0x13b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 29499392 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:55.791456+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 29499392 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:56.791569+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.569715500s of 17.644144058s, submitted: 15
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112713728 unmapped: 28090368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:57.791736+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335881 data_alloc: 218103808 data_used: 1634304
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114606080 unmapped: 26198016 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:58.791919+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91fb000/0x0/0x4ffc00000, data 0x1f9c2db/0x2061000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [0,0,1])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115662848 unmapped: 25141248 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:10:59.792064+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91ed000/0x0/0x4ffc00000, data 0x1faa2db/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115662848 unmapped: 25141248 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:00.792222+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 25133056 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:01.792365+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 25133056 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91ed000/0x0/0x4ffc00000, data 0x1faa2db/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:02.792478+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350955 data_alloc: 218103808 data_used: 2863104
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 25133056 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:03.792596+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 25133056 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:04.792704+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 25133056 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:05.792843+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 25133056 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:06.793034+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91ed000/0x0/0x4ffc00000, data 0x1faa2db/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 25133056 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:07.793232+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350971 data_alloc: 218103808 data_used: 2863104
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 25133056 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:08.793412+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 25124864 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:09.793624+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 25124864 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:10.793848+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 25124864 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:11.794172+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91ed000/0x0/0x4ffc00000, data 0x1faa2db/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 25116672 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:12.794365+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350971 data_alloc: 218103808 data_used: 2863104
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 25116672 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:13.794520+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 25116672 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:14.794644+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 25116672 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:15.794767+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 25108480 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:16.794947+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 25108480 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:17.795057+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351123 data_alloc: 218103808 data_used: 2867200
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91ed000/0x0/0x4ffc00000, data 0x1faa2db/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 25108480 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:18.795206+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 25108480 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91ed000/0x0/0x4ffc00000, data 0x1faa2db/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:19.795435+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 25108480 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:20.795585+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 25108480 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:21.795724+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 25108480 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacbc00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacbc00 session 0x55805d56d680
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6fc800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc800 session 0x55805d56d4a0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805d56de00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:22.795843+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d25f860
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351123 data_alloc: 218103808 data_used: 2867200
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6fc800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.964529037s of 25.611534119s, submitted: 88
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc800 session 0x55805d25ef00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacac00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805d567c20
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacbc00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacbc00 session 0x55805d566780
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805d5663c0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805b69d2c0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 33177600 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:23.795992+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 33177600 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:24.796164+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 33177600 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f855c000/0x0/0x4ffc00000, data 0x2c3a2eb/0x2d00000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:25.796278+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 33177600 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:26.796366+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 33177600 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:27.796494+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1437857 data_alloc: 218103808 data_used: 2867200
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f855c000/0x0/0x4ffc00000, data 0x2c3a2eb/0x2d00000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 33177600 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6fc800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc800 session 0x55805d4afe00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:28.796631+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacac00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6fc400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114991104 unmapped: 33161216 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:29.796792+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 27746304 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:30.796917+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 25231360 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:31.797055+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 25231360 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:32.797180+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528574 data_alloc: 234881024 data_used: 14991360
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 25223168 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:33.797291+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f855a000/0x0/0x4ffc00000, data 0x2c3b2eb/0x2d01000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 25190400 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:34.797433+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f855a000/0x0/0x4ffc00000, data 0x2c3b2eb/0x2d01000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123002880 unmapped: 25149440 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:35.797584+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123002880 unmapped: 25149440 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:36.797719+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123002880 unmapped: 25149440 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:37.797977+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528574 data_alloc: 234881024 data_used: 14991360
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f855a000/0x0/0x4ffc00000, data 0x2c3b2eb/0x2d01000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123002880 unmapped: 25149440 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:38.798127+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123002880 unmapped: 25149440 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:39.798312+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.176643372s of 17.280221939s, submitted: 17
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123772928 unmapped: 24379392 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:40.798459+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123772928 unmapped: 24379392 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:41.798589+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7989000/0x0/0x4ffc00000, data 0x380d2eb/0x38d3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 24338432 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:42.798781+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616990 data_alloc: 234881024 data_used: 15196160
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 24338432 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:43.798986+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 24338432 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:44.799169+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 24338432 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:45.799353+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 24338432 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:46.799496+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 24330240 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:47.799689+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616990 data_alloc: 234881024 data_used: 15196160
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 24330240 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:48.799840+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 24330240 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:49.800192+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 24322048 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:50.800466+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 24322048 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:51.800627+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 24322048 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:52.800924+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616990 data_alloc: 234881024 data_used: 15196160
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 24322048 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:53.801073+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 24322048 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:54.801231+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 24322048 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:55.801379+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 24313856 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:56.801619+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 24313856 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:57.801826+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a935800 session 0x55805d56c780
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a7f0000 session 0x55805b7f21e0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616990 data_alloc: 234881024 data_used: 15196160
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 24313856 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:58.801996+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 24313856 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:11:59.802176+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 24313856 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:00.802327+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 24313856 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:01.802466+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 24313856 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:02.802599+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616990 data_alloc: 234881024 data_used: 15196160
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 24305664 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:03.802781+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 24305664 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:04.802930+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 24305664 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:05.803062+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 24305664 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:06.803226+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 24305664 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:07.803418+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616990 data_alloc: 234881024 data_used: 15196160
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 24305664 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:08.803519+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a935800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.359371185s of 28.531023026s, submitted: 61
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 24305664 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:09.803692+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 24305664 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:10.803839+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 24297472 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:11.803980+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 24289280 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:12.804130+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1617122 data_alloc: 234881024 data_used: 15196160
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:13.804284+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 24289280 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:14.804400+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 24289280 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805a9f9c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:15.804624+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 24215552 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:16.804726+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124076032 unmapped: 24076288 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:17.804942+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 22904832 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616282 data_alloc: 234881024 data_used: 15196160
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:18.805155+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 22904832 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:19.805330+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125255680 unmapped: 22896640 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:20.805474+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125255680 unmapped: 22896640 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:21.805619+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 22888448 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:22.805790+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 22888448 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616282 data_alloc: 234881024 data_used: 15196160
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:23.805917+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125272064 unmapped: 22880256 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.529578209s of 15.271432877s, submitted: 389
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:24.806037+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124223488 unmapped: 23928832 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:25.806179+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124223488 unmapped: 23928832 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:26.806333+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124223488 unmapped: 23928832 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:27.806497+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124223488 unmapped: 23928832 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616150 data_alloc: 234881024 data_used: 15196160
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:28.806620+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124223488 unmapped: 23928832 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:29.806778+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124231680 unmapped: 23920640 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:30.806909+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124231680 unmapped: 23920640 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:31.806989+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124231680 unmapped: 23920640 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:32.807156+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124231680 unmapped: 23920640 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616150 data_alloc: 234881024 data_used: 15196160
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:33.807289+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124231680 unmapped: 23920640 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:34.807434+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124231680 unmapped: 23920640 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:35.807556+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124239872 unmapped: 23912448 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805d566000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.931664467s of 11.936676025s, submitted: 1
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc400 session 0x55805d863e00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805c453e00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91ec000/0x0/0x4ffc00000, data 0x1fab2db/0x2070000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:36.807998+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 30932992 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:37.808154+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 30932992 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1360030 data_alloc: 218103808 data_used: 2863104
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:38.808346+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 30932992 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:39.808576+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 30932992 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:40.808730+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91ec000/0x0/0x4ffc00000, data 0x1fab2db/0x2070000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 30932992 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:41.808954+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 30932992 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805d25e000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacb800 session 0x55805d3f14a0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91ec000/0x0/0x4ffc00000, data 0x1fab2db/0x2070000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacb800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:42.809267+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacb800 session 0x55805d4afc20
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238042 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:43.809397+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:44.809544+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:45.809683+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:46.809839+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:47.809957+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238042 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:48.810082+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:49.810254+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:50.810460+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:51.810587+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:52.810754+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238042 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:53.810928+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:54.811074+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:55.811251+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:56.811439+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:57.811519+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238042 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:58.811624+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:12:59.811780+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:00.811965+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:01.812079+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:02.812264+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238042 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:03.812482+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d4ae000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6fc400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc400 session 0x55805d862d20
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacac00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805c6323c0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e2f0800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805d4ae960
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:04.812656+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.715570450s of 28.907997131s, submitted: 67
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115310592 unmapped: 32841728 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805b69cb40
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6fc400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc400 session 0x55805d4afa40
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacac00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805d3f10e0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacb800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacb800 session 0x55805a673860
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6fc800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc800 session 0x55805d8623c0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:05.812778+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115326976 unmapped: 32825344 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:06.812934+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115326976 unmapped: 32825344 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9e88000/0x0/0x4ffc00000, data 0x130f2db/0x13d4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d8ad0e0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:07.813077+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6fc400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc400 session 0x55805d5641e0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115326976 unmapped: 32825344 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252852 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6fc800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc800 session 0x55805cfb2000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacac00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:08.813221+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 33480704 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805b7f4960
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:09.813378+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 33480704 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacb800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d7da000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:10.813581+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 33480704 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9e87000/0x0/0x4ffc00000, data 0x130f2eb/0x13d5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:11.813749+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 33480704 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:12.813951+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 33480704 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258466 data_alloc: 218103808 data_used: 815104
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9e87000/0x0/0x4ffc00000, data 0x130f2eb/0x13d5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacb800 session 0x55805cd752c0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d7da000 session 0x55805c3fcb40
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:13.814129+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 33480704 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805b7d61e0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:14.814341+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 33472512 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:15.814490+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 33472512 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:16.814646+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 33472512 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:17.814790+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 33472512 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240867 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:18.814929+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 33472512 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:19.815203+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 33472512 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:20.815835+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 33472512 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:21.816078+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 33472512 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:22.816305+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 33472512 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240867 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6fc400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.481660843s of 18.543272018s, submitted: 18
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc400 session 0x55805cd7d2c0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6fc800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc800 session 0x55805d56de00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805dacac00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805d56cf00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:23.816427+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805c452000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6fc400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc400 session 0x55805c452780
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 33374208 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:24.816507+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 33374208 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:25.816635+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 33374208 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:26.816752+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f994c000/0x0/0x4ffc00000, data 0x184b2db/0x1910000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 33374208 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6fc800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc800 session 0x55805d3f0d20
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:27.816889+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 33071104 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d7da000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d7da400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1292449 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9928000/0x0/0x4ffc00000, data 0x186f2db/0x1934000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:28.817009+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115146752 unmapped: 33005568 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:29.817213+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 31719424 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:30.817351+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 31719424 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d7da000 session 0x55805d5650e0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d7da400 session 0x55805d8adc20
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:31.817546+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 31719424 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:32.817718+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113336320 unmapped: 34816000 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805a673860
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244819 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:33.817833+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113336320 unmapped: 34816000 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: mgrc ms_handle_reset ms_handle_reset con 0x55805cfc4c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/844402651
Nov 23 21:28:08 compute-1 ceph-osd[77613]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/844402651,v1:192.168.122.100:6801/844402651]
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: get_auth_request con 0x55805d7da400 auth_method 0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: mgrc handle_mgr_configure stats_period=5
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:34.817978+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f8400 session 0x55805d863860
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6fc400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d64ac00 session 0x55805b4350e0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6fc800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:35.818100+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:36.818267+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:37.818459+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244819 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:38.818588+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:39.818787+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:40.818966+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:41.819116+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:42.819239+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244819 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:43.819390+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:44.819513+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:45.819649+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:46.819717+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:47.819814+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244819 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:48.819936+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:49.820048+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:50.820154+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:51.820270+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:52.820448+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244819 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:53.821144+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:54.821328+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:55.822186+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:56.822519+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:57.822901+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244819 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:58.823171+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:13:59.823632+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d7da000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d7da000 session 0x55805d25fa40
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8c1000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c1000 session 0x55805d25e000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8c0000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0000 session 0x55805d8ada40
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8c0400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0400 session 0x55805d92bc20
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 37.006832123s of 37.314971924s, submitted: 21
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:00.823833+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d8ac3c0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d7da000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d7da000 session 0x55805d56c5a0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8c0000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0000 session 0x55805d65dc20
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8c1000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c1000 session 0x55805d3f1680
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8c0c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0c00 session 0x55805d8bf2c0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 34373632 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:01.824047+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9db4000/0x0/0x4ffc00000, data 0x13e22eb/0x14a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 34373632 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:02.824315+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9db4000/0x0/0x4ffc00000, data 0x13e22eb/0x14a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 34373632 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277031 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9db4000/0x0/0x4ffc00000, data 0x13e22eb/0x14a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:03.824562+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 34373632 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:04.824991+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 34365440 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:05.825346+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805b7d74a0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 34365440 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d7da000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d7da000 session 0x55805cd745a0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:06.825558+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 34365440 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8c0000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0000 session 0x55805cc80b40
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:07.825781+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8c0c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0c00 session 0x55805cc812c0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112230400 unmapped: 35921920 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9db3000/0x0/0x4ffc00000, data 0x13e22fb/0x14a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278845 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8c1000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d73d400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:08.826208+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111910912 unmapped: 36241408 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:09.826567+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9db3000/0x0/0x4ffc00000, data 0x13e22fb/0x14a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:10.826702+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:11.826846+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:12.826941+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285837 data_alloc: 218103808 data_used: 1339392
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:13.827091+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9db3000/0x0/0x4ffc00000, data 0x13e22fb/0x14a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:14.827234+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:15.827494+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:16.827659+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:17.827796+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285837 data_alloc: 218103808 data_used: 1339392
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:18.827935+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:19.828134+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9db3000/0x0/0x4ffc00000, data 0x13e22fb/0x14a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.161880493s of 19.237621307s, submitted: 18
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 31727616 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:20.828299+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 31547392 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:21.828424+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 31866880 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:22.828582+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 31866880 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361745 data_alloc: 218103808 data_used: 1740800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:23.828737+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 31866880 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:24.828903+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9468000/0x0/0x4ffc00000, data 0x1d1e2fb/0x1de5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 31866880 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:25.829097+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 31866880 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:26.829324+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9468000/0x0/0x4ffc00000, data 0x1d1e2fb/0x1de5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 31989760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:27.830356+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 31989760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356265 data_alloc: 218103808 data_used: 1740800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9453000/0x0/0x4ffc00000, data 0x1d422fb/0x1e09000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:28.830754+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 31989760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:29.831469+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 31989760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9453000/0x0/0x4ffc00000, data 0x1d422fb/0x1e09000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:30.832073+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 31989760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:31.832577+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9453000/0x0/0x4ffc00000, data 0x1d422fb/0x1e09000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 31989760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9453000/0x0/0x4ffc00000, data 0x1d422fb/0x1e09000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:32.832732+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 31989760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356569 data_alloc: 218103808 data_used: 1748992
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:33.832850+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 31989760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.781532288s of 14.450411797s, submitted: 111
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:34.832972+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 31866880 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:35.833057+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c1000 session 0x55805b69d4a0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d73d400 session 0x55805d25e1e0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 31842304 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805b2a52c0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9443000/0x0/0x4ffc00000, data 0x1d522fb/0x1e19000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:36.833416+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 31834112 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:37.833570+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 31834112 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253702 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:38.833696+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 31834112 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:39.833898+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 31834112 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:40.834123+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 31834112 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:41.834412+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 31834112 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:42.834727+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 31834112 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253702 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:43.834952+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 31834112 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:44.835094+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 31825920 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:45.835271+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 31825920 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:46.835428+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 31825920 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:47.835641+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 31825920 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253702 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:48.835773+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 31825920 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:49.835991+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 31825920 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:50.836189+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 31817728 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:51.836320+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 31817728 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:52.836466+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 31817728 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253702 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:53.836595+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 31817728 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:54.836734+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 31817728 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:55.836878+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 31817728 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:56.837077+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 31817728 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:57.837277+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d7da000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d7da000 session 0x55805d4730e0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8c0000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0000 session 0x55805cc7fe00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8c0c00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0c00 session 0x55805cc7f2c0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805cc7ed20
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d73d400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.686355591s of 23.753026962s, submitted: 20
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d73d400 session 0x55805a673680
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d7da000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d7da000 session 0x55805a6734a0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116801536 unmapped: 38707200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8c0000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0000 session 0x55805cc810e0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e6fc000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc000 session 0x55805d3f1c20
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d3f01e0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306052 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:58.839302+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 38699008 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:14:59.839475+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 38699008 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f98d4000/0x0/0x4ffc00000, data 0x18c32db/0x1988000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:00.839637+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 38699008 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:01.839779+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 38699008 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:02.839947+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 38690816 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306052 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:03.840086+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 38690816 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f98d4000/0x0/0x4ffc00000, data 0x18c32db/0x1988000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d73d400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:04.840256+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d73d400 session 0x55805a7e63c0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 38690816 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d7da000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8c0000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:05.840424+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 38690816 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:06.840614+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 36659200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:07.840781+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 36659200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:08.840989+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355300 data_alloc: 218103808 data_used: 7626752
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 36659200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:09.841208+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 36659200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f98d4000/0x0/0x4ffc00000, data 0x18c32db/0x1988000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:10.841327+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 36659200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:11.841404+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 36659200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:12.841514+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f98d4000/0x0/0x4ffc00000, data 0x18c32db/0x1988000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 36659200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:13.841652+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355300 data_alloc: 218103808 data_used: 7626752
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 36659200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:14.841829+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 36659200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e6fc400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.395429611s of 17.444917679s, submitted: 6
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc400 session 0x55805a7e6d20
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:15.841916+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e6fc800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc800 session 0x55805a7e65a0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e6fcc00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fcc00 session 0x55805cd752c0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d4aef00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d73d400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d73d400 session 0x55805c452780
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118292480 unmapped: 37216256 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:16.842067+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8d80000/0x0/0x4ffc00000, data 0x24172db/0x24dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8d80000/0x0/0x4ffc00000, data 0x24172db/0x24dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120340480 unmapped: 35168256 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:17.842210+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 119513088 unmapped: 35995648 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:18.842356+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1475407 data_alloc: 218103808 data_used: 7741440
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120201216 unmapped: 35307520 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:19.842536+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f87dd000/0x0/0x4ffc00000, data 0x29ba2db/0x2a7f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120201216 unmapped: 35307520 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:20.842719+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120201216 unmapped: 35307520 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:21.842936+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120201216 unmapped: 35307520 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:22.843096+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e6fc400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc400 session 0x55805d25fa40
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f87dd000/0x0/0x4ffc00000, data 0x29ba2db/0x2a7f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 35291136 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:23.843287+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e6fc800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc800 session 0x55805d4723c0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1483725 data_alloc: 218103808 data_used: 7733248
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 35291136 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e6fd000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fd000 session 0x55805d65d680
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:24.843435+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805cd74960
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d73d400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e6fc400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 35282944 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f87ba000/0x0/0x4ffc00000, data 0x29db30e/0x2aa2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:25.843560+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123011072 unmapped: 32497664 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:26.843720+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130220032 unmapped: 25288704 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:27.843848+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f87ba000/0x0/0x4ffc00000, data 0x29db30e/0x2aa2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130252800 unmapped: 25255936 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:28.844013+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566132 data_alloc: 234881024 data_used: 19611648
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130252800 unmapped: 25255936 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:29.844195+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130252800 unmapped: 25255936 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:30.844385+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f87ba000/0x0/0x4ffc00000, data 0x29db30e/0x2aa2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130285568 unmapped: 25223168 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:31.844535+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130285568 unmapped: 25223168 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:32.844699+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130285568 unmapped: 25223168 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:33.844848+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566132 data_alloc: 234881024 data_used: 19611648
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f87ba000/0x0/0x4ffc00000, data 0x29db30e/0x2aa2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130285568 unmapped: 25223168 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:34.844967+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130285568 unmapped: 25223168 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:35.845134+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130285568 unmapped: 25223168 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:36.845295+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.900854111s of 21.241012573s, submitted: 73
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134406144 unmapped: 21102592 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:37.845453+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f87ba000/0x0/0x4ffc00000, data 0x29db30e/0x2aa2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [0,0,0,0,0,0,0,16,0,27])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 133488640 unmapped: 22020096 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:38.845595+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1629540 data_alloc: 234881024 data_used: 19615744
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 133537792 unmapped: 21970944 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:39.845775+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134168576 unmapped: 21340160 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:40.845911+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134168576 unmapped: 21340160 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:41.846032+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134168576 unmapped: 21340160 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:42.846164+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7f44000/0x0/0x4ffc00000, data 0x325130e/0x3318000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134201344 unmapped: 21307392 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:43.846302+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1635894 data_alloc: 234881024 data_used: 19615744
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134201344 unmapped: 21307392 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:44.846471+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134201344 unmapped: 21307392 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:45.846729+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7f41000/0x0/0x4ffc00000, data 0x325430e/0x331b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134225920 unmapped: 21282816 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:46.846934+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134225920 unmapped: 21282816 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:47.847102+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134225920 unmapped: 21282816 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:48.847284+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1634766 data_alloc: 234881024 data_used: 19615744
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7f41000/0x0/0x4ffc00000, data 0x325430e/0x331b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134225920 unmapped: 21282816 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.255509377s of 12.832665443s, submitted: 75
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:49.847525+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7f41000/0x0/0x4ffc00000, data 0x325430e/0x331b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134250496 unmapped: 21258240 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:50.847648+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134250496 unmapped: 21258240 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:51.847771+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7f41000/0x0/0x4ffc00000, data 0x325430e/0x331b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134250496 unmapped: 21258240 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:52.847925+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134250496 unmapped: 21258240 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:53.848093+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1634766 data_alloc: 234881024 data_used: 19615744
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134250496 unmapped: 21258240 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:54.848378+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d73d400 session 0x55805d8be1e0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc400 session 0x55805d8bf0e0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e6fc800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7f3b000/0x0/0x4ffc00000, data 0x325a30e/0x3321000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [1])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc800 session 0x55805d92b0e0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 28704768 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7f3b000/0x0/0x4ffc00000, data 0x325a30e/0x3321000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:55.848526+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 28704768 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:56.848720+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 28704768 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:57.848965+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 28704768 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:58.849173+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1416071 data_alloc: 218103808 data_used: 7733248
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d7da000 session 0x55805d472b40
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0000 session 0x55805d5652c0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 34758656 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.843377113s of 10.034677505s, submitted: 59
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:15:59.849368+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d565860
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9302000/0x0/0x4ffc00000, data 0x1e902db/0x1f55000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 34750464 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:00.849503+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 34750464 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:01.849709+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 34750464 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:02.849958+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 34750464 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:03.850529+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1273869 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 34750464 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:04.851233+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:05.851577+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:06.851969+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:07.852154+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:08.852373+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1273869 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:09.853315+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:10.854031+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:11.854597+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:12.855000+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:13.855250+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1273869 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:14.855665+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:15.855953+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:16.856116+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:17.856275+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:18.856519+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1273869 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:19.856788+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:20.857155+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:21.857444+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:22.857567+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:23.857698+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1273869 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:24.857825+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d73d400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d73d400 session 0x55805a99b4a0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e6fc400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc400 session 0x55805a99ba40
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e6fc800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc800 session 0x55805d92a960
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d92af00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d73d400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.899166107s of 25.910942078s, submitted: 4
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d73d400 session 0x55805d92ba40
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:25.857932+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8c0000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0000 session 0x55805d8ac3c0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e6fc400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc400 session 0x55805b7f4780
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 35102720 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e6fd400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fd400 session 0x55805cfb30e0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805cfb23c0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:26.858074+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 35102720 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:27.858302+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 35102720 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:28.858497+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 35102720 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99f2000/0x0/0x4ffc00000, data 0x17a52db/0x186a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321099 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:29.858682+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 35102720 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:30.858858+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 35102720 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:31.859117+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 35102720 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d73d400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d73d400 session 0x55805cfb3c20
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8c0000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0000 session 0x55805cfb3860
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:32.859259+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 35102720 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e6fc400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc400 session 0x55805c7f0000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e6fd800
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fd800 session 0x55805c7f12c0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d6b7400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d73d400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:33.859408+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120422400 unmapped: 35086336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321099 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:34.859539+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99f2000/0x0/0x4ffc00000, data 0x17a52db/0x186a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:35.859669+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:36.859814+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99f2000/0x0/0x4ffc00000, data 0x17a52db/0x186a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:37.859925+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:38.860098+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356667 data_alloc: 218103808 data_used: 5595136
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:39.860259+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:40.860370+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99f2000/0x0/0x4ffc00000, data 0x17a52db/0x186a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99f2000/0x0/0x4ffc00000, data 0x17a52db/0x186a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:41.860490+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:42.860639+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:43.860968+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356667 data_alloc: 218103808 data_used: 5595136
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:44.861097+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.807069778s of 19.842288971s, submitted: 9
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:45.861221+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99f2000/0x0/0x4ffc00000, data 0x17a52db/0x186a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 31932416 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:46.861338+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8d78000/0x0/0x4ffc00000, data 0x20002db/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:47.861603+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8d78000/0x0/0x4ffc00000, data 0x20002db/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:48.861735+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1436141 data_alloc: 218103808 data_used: 6033408
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8d78000/0x0/0x4ffc00000, data 0x20002db/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:49.861926+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:50.862061+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:51.862261+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:52.862392+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:53.862557+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1436141 data_alloc: 218103808 data_used: 6033408
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:54.862687+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8d78000/0x0/0x4ffc00000, data 0x20002db/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:55.862906+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:56.863043+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:57.863165+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805c7f05a0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d73d400 session 0x55805cd7c3c0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805d8c0000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.674288750s of 12.870928764s, submitted: 67
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8d78000/0x0/0x4ffc00000, data 0x20002db/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0000 session 0x55805cc7e000
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:58.863305+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:16:59.863563+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:00.863698+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:01.863912+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:02.864037+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:03.864197+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:04.864346+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:05.864532+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:06.864679+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:08.349772+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:09.349907+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:10.350137+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:11.350272+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:12.350417+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:13.350568+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:14.350691+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:15.350910+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:16.351095+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:17.351294+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:18.351422+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:19.351534+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:20.351719+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:21.351847+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:22.351992+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:23.352203+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:24.352327+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:25.352620+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:26.352716+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:27.352809+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:28.352975+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:29.353123+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:30.353280+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:31.353414+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:32.353509+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:33.353653+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:34.353824+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:35.353979+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:36.354134+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:37.354280+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:38.354416+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:39.354581+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:40.355398+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:41.356028+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:42.356585+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:43.357080+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:44.357601+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:45.358010+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:46.358293+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:47.358511+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:48.358794+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:49.359058+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:50.359381+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:51.359522+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:52.359739+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:53.359984+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:54.360242+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:55.360420+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:56.360578+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:57.360814+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:58.361008+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:17:59.361148+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:00.361307+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:01.361504+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:02.361656+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:03.361818+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:04.361953+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:05.362115+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:06.362275+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:07.362425+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:08.362539+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:09.362657+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:10.362795+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:11.362987+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:12.363118+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121356288 unmapped: 34152448 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: do_command 'config diff' '{prefix=config diff}'
Nov 23 21:28:08 compute-1 ceph-osd[77613]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 23 21:28:08 compute-1 ceph-osd[77613]: do_command 'config show' '{prefix=config show}'
Nov 23 21:28:08 compute-1 ceph-osd[77613]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 23 21:28:08 compute-1 ceph-osd[77613]: do_command 'counter dump' '{prefix=counter dump}'
Nov 23 21:28:08 compute-1 ceph-osd[77613]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 23 21:28:08 compute-1 ceph-osd[77613]: do_command 'counter schema' '{prefix=counter schema}'
Nov 23 21:28:08 compute-1 ceph-osd[77613]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:13.363249+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 34668544 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:14.363679+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 34684928 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: do_command 'log dump' '{prefix=log dump}'
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:15.363920+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 131923968 unmapped: 34627584 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 23 21:28:08 compute-1 ceph-osd[77613]: do_command 'perf dump' '{prefix=perf dump}'
Nov 23 21:28:08 compute-1 ceph-osd[77613]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 23 21:28:08 compute-1 ceph-osd[77613]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 23 21:28:08 compute-1 ceph-osd[77613]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 23 21:28:08 compute-1 ceph-osd[77613]: do_command 'perf schema' '{prefix=perf schema}'
Nov 23 21:28:08 compute-1 ceph-osd[77613]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:16.364059+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:17.364209+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:18.364327+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:19.364441+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:20.364621+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:21.364738+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:22.364884+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:23.365021+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:24.365132+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:25.365255+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:26.365399+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:27.365553+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:28.365670+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:29.365799+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:30.365910+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:31.366066+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:32.366190+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:33.366324+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:34.366457+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:35.366583+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:36.366703+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:37.366829+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:38.366957+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:39.367165+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:40.367338+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:41.367593+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:42.374269+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:43.374390+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 45801472 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:44.374571+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 45801472 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:45.374960+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 45801472 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:46.375158+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 45801472 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:47.375426+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 45801472 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:48.375665+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 45801472 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:49.375956+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 45801472 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:50.376283+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 45801472 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:51.376547+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 45801472 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:52.376775+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 45801472 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:53.376986+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 45801472 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:54.377147+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 45801472 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:55.377343+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 45801472 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:56.377531+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 45801472 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:57.377711+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 45801472 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:58.377917+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 45801472 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:18:59.378139+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 45801472 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:00.378331+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 45801472 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:01.378515+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 45793280 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:02.378675+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 45793280 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:03.378799+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 45793280 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:04.379491+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 45793280 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:05.379654+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 45793280 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:06.379841+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 45793280 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:07.380018+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 45793280 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:08.380161+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 45793280 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:09.380291+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 45793280 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:10.380491+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 45793280 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:11.380661+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 45793280 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:12.380941+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 45793280 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:13.381159+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 45793280 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:14.381355+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 45793280 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:15.381497+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 45785088 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:16.381641+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 45785088 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:17.381792+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 45785088 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:18.381911+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 45785088 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:19.382028+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 45785088 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:20.382216+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 45785088 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:21.382366+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 45785088 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:22.382765+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 45785088 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:23.382902+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 45785088 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:24.383040+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 45785088 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:25.383227+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 45785088 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:26.383365+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 45785088 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:27.383513+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 45785088 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:28.383681+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 45785088 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:29.383830+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 45785088 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:30.383996+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 45785088 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:31.384154+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 45785088 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:32.384354+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 45785088 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:33.384514+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:34.384691+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:35.384845+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:36.385293+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:37.385403+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:38.385623+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:39.385765+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:40.385916+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:41.386026+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:42.386153+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:43.386274+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:44.386438+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:45.386608+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:46.386764+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:47.386930+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:48.387103+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:49.387227+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:50.387384+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:51.387518+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:52.387680+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:53.387839+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:54.388045+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:55.388229+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:56.388412+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:57.388562+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:58.388735+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:19:59.388939+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:00.389128+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:01.389264+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:02.389417+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:03.389528+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:04.389651+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 13K writes, 50K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 13K writes, 3842 syncs, 3.51 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2001 writes, 6920 keys, 2001 commit groups, 1.0 writes per commit group, ingest: 6.50 MB, 0.01 MB/s
                                           Interval WAL: 2001 writes, 838 syncs, 2.39 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:05.389766+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:06.389908+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:07.390151+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:08.390293+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:09.390430+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:10.390595+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:11.390700+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:12.390937+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:13.391082+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:14.391270+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:15.391433+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:16.391579+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:17.391704+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:18.391859+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 45760512 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:19.392096+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 45760512 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:20.392265+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 45760512 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:21.392390+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 45760512 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:22.392589+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 45760512 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:23.392692+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 45760512 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:24.392813+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 45760512 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:25.392947+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 45760512 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:26.393085+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 45760512 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:27.393231+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 45760512 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:28.393362+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 45760512 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:29.393452+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 45760512 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:30.393631+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 45760512 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:31.393763+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 45760512 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:32.393910+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 45760512 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:33.394048+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 45760512 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:34.394183+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 45760512 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:35.394298+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 45752320 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:36.394433+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 45752320 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:37.394566+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 45752320 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:38.394665+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 45752320 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:39.394785+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 45752320 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:40.394920+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 45752320 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:41.395050+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 45752320 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:42.395195+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 45752320 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:43.395314+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 45752320 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:44.395474+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 45752320 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:45.395686+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 45752320 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:46.395909+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 45744128 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:47.396115+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 45744128 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:48.396288+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 45744128 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:49.396478+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 45744128 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:50.396675+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 45744128 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:51.396830+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 45744128 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:52.396953+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 45744128 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:53.397103+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 45744128 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:54.397261+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 45744128 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:55.397432+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 45744128 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:56.397597+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 45744128 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:57.397745+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 45744128 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:58.397934+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 45744128 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:20:59.398107+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 45744128 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:00.398300+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 45744128 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:01.398465+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 45744128 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:02.398667+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120815616 unmapped: 45735936 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:03.398831+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120815616 unmapped: 45735936 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:04.399116+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120815616 unmapped: 45735936 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:05.399331+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120815616 unmapped: 45735936 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:06.399485+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120815616 unmapped: 45735936 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:07.399622+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120815616 unmapped: 45735936 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:08.399791+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120815616 unmapped: 45735936 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:09.399975+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120815616 unmapped: 45735936 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:10.400161+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120815616 unmapped: 45735936 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:11.400290+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120815616 unmapped: 45735936 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:12.400415+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120815616 unmapped: 45735936 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:13.400551+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120815616 unmapped: 45735936 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:14.400727+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120815616 unmapped: 45735936 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:15.400944+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120815616 unmapped: 45735936 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:16.401066+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 45727744 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:17.401213+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 45727744 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:18.401405+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 45727744 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:19.401545+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 45727744 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:20.401765+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 45727744 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:21.401923+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 45727744 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:22.402113+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 45727744 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:23.402272+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 45727744 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:24.402468+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 45727744 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:25.402609+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 45727744 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:26.402749+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 45727744 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:27.402954+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 45727744 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:28.403115+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 45727744 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:29.403278+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 45727744 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:30.403471+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 45727744 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:31.403644+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 45727744 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:32.403780+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:33.403982+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:34.404155+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:35.404337+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:36.404492+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:37.404623+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:38.404780+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:39.404951+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:40.405111+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:41.405250+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:42.405387+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:43.405646+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:44.405845+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:45.406021+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:46.406147+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:47.406287+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:48.406440+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:49.406571+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:50.406723+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:51.406915+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:52.407158+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:53.407365+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:54.407552+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 45711360 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:55.407694+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 45711360 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:56.407913+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 45711360 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:57.408114+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 45711360 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:58.408388+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 45711360 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:21:59.408558+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 45711360 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:00.408752+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 45711360 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:01.408907+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 45711360 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:02.409090+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 45711360 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:03.409258+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 45711360 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:04.409405+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 45711360 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:05.409555+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 45711360 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:06.409775+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 45703168 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:07.409926+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 45703168 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:08.410072+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 45703168 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:09.410221+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 45703168 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:10.410401+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 45703168 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:11.410542+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 45703168 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:12.410721+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 45703168 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:13.410916+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 45703168 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:14.411039+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 316.868835449s of 316.921936035s, submitted: 19
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 45694976 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:15.411172+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284700 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 45670400 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:16.411310+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 122126336 unmapped: 44425216 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:17.411439+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [1,0,0,1])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123273216 unmapped: 43278336 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:18.411607+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123273216 unmapped: 43278336 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:19.411761+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123273216 unmapped: 43278336 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:20.411961+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123273216 unmapped: 43278336 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:21.412097+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123273216 unmapped: 43278336 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:22.412209+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123273216 unmapped: 43278336 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:23.412332+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123273216 unmapped: 43278336 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:24.412466+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123273216 unmapped: 43278336 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:25.412613+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123273216 unmapped: 43278336 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:26.412765+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123281408 unmapped: 43270144 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:27.412938+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123289600 unmapped: 43261952 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:28.413077+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123289600 unmapped: 43261952 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:29.413255+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123289600 unmapped: 43261952 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:30.413947+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:31.414332+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123289600 unmapped: 43261952 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:32.414818+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123289600 unmapped: 43261952 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:33.414960+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123289600 unmapped: 43261952 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:34.415151+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123289600 unmapped: 43261952 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:35.415839+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123297792 unmapped: 43253760 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:36.416418+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123297792 unmapped: 43253760 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:37.416925+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 43245568 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:38.417327+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 43245568 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:39.417650+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 43245568 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:40.417949+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 43245568 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:41.418159+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 43245568 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:42.418366+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 43245568 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:43.418568+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 43245568 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:44.418794+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805b24c400 session 0x55805c453680
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e6fc400
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 43245568 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:45.418986+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 43237376 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:46.419162+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 43237376 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:47.419307+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 43237376 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:48.419462+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 43237376 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:49.419610+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 43237376 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:50.419922+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 43237376 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:51.420103+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 43229184 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:52.420331+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 43229184 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:53.420671+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 43229184 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:54.420959+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 43229184 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:55.421164+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 43229184 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:56.421429+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 43220992 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:57.421610+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 43220992 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:58.421973+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 43220992 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:22:59.422187+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 43220992 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:00.422394+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 43220992 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:01.422565+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 43220992 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:02.422705+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 43220992 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:03.422856+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 43220992 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:04.423131+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 43220992 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:05.423308+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 43220992 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:06.423523+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 43220992 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:07.423648+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 43220992 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:08.423781+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 43220992 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:09.423928+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 43212800 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:10.424067+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 43212800 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:11.424193+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 43212800 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:12.424333+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 43212800 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:13.424447+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 43212800 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:14.424589+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 43212800 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:15.424746+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 43212800 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:16.424927+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 43212800 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:17.425066+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 43212800 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:18.425137+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 43212800 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:19.425249+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 43212800 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:20.425361+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 43212800 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:21.425509+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 43212800 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:22.425636+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 43212800 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:23.425805+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 43212800 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:24.426003+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 43212800 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:25.426197+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:26.426340+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:27.426505+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:28.426661+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:29.426767+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:30.426930+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:31.427067+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:32.427272+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:33.427403+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:34.427561+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:35.427694+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:36.427839+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:37.428085+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:38.428262+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:39.428439+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:40.428597+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:41.428755+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:42.428914+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:43.429063+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:44.429205+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:45.429411+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:46.429593+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:47.429773+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:48.429970+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:49.430160+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:50.430360+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:51.430486+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:52.430807+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:53.430989+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:54.431148+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:55.431317+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:56.431462+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:57.431613+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:58.431781+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:23:59.431935+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:00.432101+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:01.432265+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:02.432460+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:03.432623+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:04.432806+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:05.433001+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:06.433192+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:07.433351+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:08.433486+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:09.433700+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:10.433894+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:11.434052+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:12.434167+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:13.434329+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:14.434459+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:15.434626+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:16.434830+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:17.434991+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:18.435125+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:19.435265+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:20.435448+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:21.435633+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:22.435800+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:23.435947+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:24.436110+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:25.436367+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:26.436549+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:27.436699+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:28.436943+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:29.437101+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:30.437333+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:31.437472+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:32.437658+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:33.437854+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:34.438042+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:35.438199+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:36.438341+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:37.438494+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:38.438638+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:39.438791+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:40.438985+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:41.439096+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:42.439227+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:43.439529+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:44.439731+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:45.439942+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:46.440114+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 43188224 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:47.440305+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 43188224 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:48.440445+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 43188224 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:49.440634+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 43188224 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:50.440852+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 43188224 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:51.441050+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 43188224 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:52.441220+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 43188224 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:53.441355+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 43188224 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:54.441524+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 43188224 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:55.441707+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 43188224 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:56.441937+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 43188224 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:57.442154+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 43188224 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:58.442382+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 43188224 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:24:59.442582+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 43188224 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:00.442814+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 43188224 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:01.443010+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:02.443199+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:03.443352+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:04.443534+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets getting new tickets!
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:05.443748+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _finish_auth 0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:05.445191+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:06.443928+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:07.444086+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:08.444233+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:09.444456+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:10.444647+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:11.444834+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:12.445525+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:13.446010+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:14.446461+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:15.446831+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:16.447157+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:17.447451+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:18.447681+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:19.448111+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:20.448387+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:21.448518+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:22.448770+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123379712 unmapped: 43171840 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:23.448956+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123379712 unmapped: 43171840 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:24.449099+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123379712 unmapped: 43171840 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:25.449212+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123379712 unmapped: 43171840 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:26.449396+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123379712 unmapped: 43171840 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:27.449718+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123379712 unmapped: 43171840 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:28.449977+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123379712 unmapped: 43171840 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:29.450133+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123379712 unmapped: 43171840 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:30.450408+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123379712 unmapped: 43171840 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:31.450556+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:32.450763+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:33.450949+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:34.451161+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:35.451374+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:36.451548+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:37.451777+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:38.452034+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:39.452259+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:40.452515+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:41.452678+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:42.452846+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:43.452995+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:44.453138+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:45.453263+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:46.453417+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:47.453528+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:48.453649+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:49.453758+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:50.453975+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:51.454158+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:52.454303+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:53.454435+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:54.454602+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:55.454748+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:56.455274+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 43155456 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:57.455400+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 43155456 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:58.455615+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 43155456 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:25:59.455799+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 43155456 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:00.456004+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 43155456 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:01.456172+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 43155456 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:02.456354+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 43155456 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:03.456743+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 43155456 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:04.456960+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 43155456 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:05.457135+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 43155456 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:06.457287+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 43155456 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:07.457476+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 43155456 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:08.688251+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 43155456 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:09.688398+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 43155456 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:10.688567+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 43147264 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:11.688722+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 43147264 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:12.688941+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 43147264 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:13.689070+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 43147264 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:14.689271+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 43147264 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:15.689404+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 43147264 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:16.689509+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 43147264 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:17.689630+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 43147264 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:18.689767+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 43147264 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:19.689912+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 43147264 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:20.690089+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 43147264 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:21.690246+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 43147264 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:22.690385+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 43147264 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:23.690553+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 43147264 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:24.690712+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 43147264 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:25.690919+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 43147264 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:26.691049+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 43139072 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:27.691235+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 43139072 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:28.691385+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 43139072 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:29.691571+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 43139072 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:30.691806+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 43139072 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:31.691974+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 43139072 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:32.692106+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 43139072 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:33.692258+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 43139072 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:34.692436+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 43139072 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:35.692573+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 43139072 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:36.692736+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 43139072 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:37.692938+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 43139072 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:38.693163+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 43139072 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:39.693331+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 43139072 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:40.693523+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 43139072 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:41.693678+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 43139072 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:42.693810+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 43130880 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:43.693940+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 43130880 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:44.694062+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 43130880 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:45.694204+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 43130880 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:46.694329+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 43130880 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:47.694472+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 43130880 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:48.694617+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 43130880 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:49.694820+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 43130880 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:50.695078+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 43130880 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:51.695302+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 43130880 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:52.695563+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 43130880 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:53.695732+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 43130880 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:54.695934+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 43130880 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:55.696187+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 43130880 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:56.696357+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 43130880 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:57.696528+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 43130880 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:58.696772+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 43122688 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:26:59.697021+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 43122688 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:00.697206+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 43122688 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:01.697374+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 43122688 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:02.697560+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 43122688 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:03.697724+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 43122688 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:04.697859+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 43122688 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:05.698004+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 43122688 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:06.698236+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 43122688 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:07.698432+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 43122688 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:08.698609+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 43122688 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:09.698810+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 43122688 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:10.699026+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 43122688 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:11.699151+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 43122688 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:12.699309+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 43122688 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:13.699445+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 43122688 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:14.699615+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805c4532c0
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: handle_auth_request added challenge on 0x55805e6fdc00
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 43114496 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:15.699747+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 43114496 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:16.699921+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 43114496 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:17.700067+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 43114496 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:18.700261+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 43114496 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:19.700463+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 43114496 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:20.700651+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 43114496 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:21.700837+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 43114496 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:22.701031+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 43114496 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:23.701154+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 43114496 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:24.701271+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 43114496 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:25.701388+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 43114496 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:26.701544+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 43106304 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:27.701681+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 43106304 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:28.701821+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 43106304 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:29.701988+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 43106304 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:30.702126+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 43106304 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:31.702261+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:32.702918+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 43106304 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:33.703055+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 43106304 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:34.703172+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 43106304 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:35.703285+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 43106304 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: do_command 'config diff' '{prefix=config diff}'
Nov 23 21:28:08 compute-1 ceph-osd[77613]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 23 21:28:08 compute-1 ceph-osd[77613]: do_command 'config show' '{prefix=config show}'
Nov 23 21:28:08 compute-1 ceph-osd[77613]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 23 21:28:08 compute-1 ceph-osd[77613]: do_command 'counter dump' '{prefix=counter dump}'
Nov 23 21:28:08 compute-1 ceph-osd[77613]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 23 21:28:08 compute-1 ceph-osd[77613]: do_command 'counter schema' '{prefix=counter schema}'
Nov 23 21:28:08 compute-1 ceph-osd[77613]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:36.703403+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123265024 unmapped: 43286528 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 21:28:08 compute-1 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 21:28:08 compute-1 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: tick
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_tickets
Nov 23 21:28:08 compute-1 ceph-osd[77613]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-23T21:27:37.703552+0000)
Nov 23 21:28:08 compute-1 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123215872 unmapped: 43335680 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 21:28:08 compute-1 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 21:28:08 compute-1 ceph-osd[77613]: do_command 'log dump' '{prefix=log dump}'
Nov 23 21:28:08 compute-1 ceph-mon[80135]: from='client.28340 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:08 compute-1 ceph-mon[80135]: from='client.18336 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:08 compute-1 ceph-mon[80135]: from='client.27166 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2164303278' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 23 21:28:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/581907728' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 21:28:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.10:0/581907728' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 21:28:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1316302909' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 23 21:28:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1285252654' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 23 21:28:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3896004946' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 23 21:28:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/18556835' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 23 21:28:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/973628488' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 23 21:28:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/902183333' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 23 21:28:08 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/351794627' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 23 21:28:09 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 23 21:28:09 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1842955223' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 23 21:28:09 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:28:09 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:28:09 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:28:09.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:28:09 compute-1 nova_compute[230183]: 2025-11-23 21:28:09.418 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:28:09 compute-1 nova_compute[230183]: 2025-11-23 21:28:09.420 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 21:28:09 compute-1 nova_compute[230183]: 2025-11-23 21:28:09.420 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 23 21:28:09 compute-1 nova_compute[230183]: 2025-11-23 21:28:09.420 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:28:09 compute-1 nova_compute[230183]: 2025-11-23 21:28:09.450 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:28:09 compute-1 nova_compute[230183]: 2025-11-23 21:28:09.451 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 21:28:09 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Nov 23 21:28:09 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2176844904' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 23 21:28:10 compute-1 ceph-mon[80135]: from='client.28361 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:10 compute-1 ceph-mon[80135]: from='client.18357 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:10 compute-1 ceph-mon[80135]: from='client.28391 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:10 compute-1 ceph-mon[80135]: from='client.18372 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:10 compute-1 ceph-mon[80135]: from='client.28415 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:10 compute-1 ceph-mon[80135]: pgmap v1409: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:28:10 compute-1 ceph-mon[80135]: from='client.18405 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:10 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1842955223' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 23 21:28:10 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/987271107' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 23 21:28:10 compute-1 ceph-mon[80135]: from='client.28433 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:10 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2618177895' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 23 21:28:10 compute-1 ceph-mon[80135]: from='client.27226 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:10 compute-1 ceph-mon[80135]: from='client.18420 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:10 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/213428320' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 23 21:28:10 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3923006443' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 23 21:28:10 compute-1 ceph-mon[80135]: from='client.28451 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:10 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3599506863' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 23 21:28:10 compute-1 ceph-mon[80135]: from='client.27241 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:10 compute-1 ceph-mon[80135]: from='client.18441 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:10 compute-1 crontab[257905]: (root) LIST (root)
Nov 23 21:28:10 compute-1 nova_compute[230183]: 2025-11-23 21:28:10.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:28:10 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:28:10 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:28:10 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:28:10.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:28:10 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Nov 23 21:28:10 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3372307337' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 23 21:28:11 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2176844904' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 23 21:28:11 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1602941432' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 23 21:28:11 compute-1 ceph-mon[80135]: from='client.28472 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:11 compute-1 ceph-mon[80135]: from='client.27253 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:11 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3672847451' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 23 21:28:11 compute-1 ceph-mon[80135]: from='client.28478 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:11 compute-1 ceph-mon[80135]: from='client.28493 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:28:11 compute-1 ceph-mon[80135]: from='client.27268 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:11 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1372842905' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 23 21:28:11 compute-1 ceph-mon[80135]: from='client.18480 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:28:11 compute-1 ceph-mon[80135]: from='client.28505 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:28:11 compute-1 ceph-mon[80135]: pgmap v1410: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:28:11 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2055775712' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 23 21:28:11 compute-1 ceph-mon[80135]: from='client.27286 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:11 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3372307337' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 23 21:28:11 compute-1 sudo[258041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 23 21:28:11 compute-1 sudo[258041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 23 21:28:11 compute-1 sudo[258041]: pam_unix(sudo:session): session closed for user root
Nov 23 21:28:11 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Nov 23 21:28:11 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1833908746' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 23 21:28:11 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:28:11 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:28:11 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:28:11.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:28:11 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Nov 23 21:28:11 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/706782067' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 23 21:28:11 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Nov 23 21:28:11 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2668548967' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 23 21:28:12 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Nov 23 21:28:12 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/831638846' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 23 21:28:12 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Nov 23 21:28:12 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3240368654' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 23 21:28:12 compute-1 ceph-mon[80135]: from='client.18504 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:28:12 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/313308784' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 23 21:28:12 compute-1 ceph-mon[80135]: from='client.28526 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:28:12 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1833908746' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 23 21:28:12 compute-1 ceph-mon[80135]: from='client.27304 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:12 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2685656778' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 23 21:28:12 compute-1 ceph-mon[80135]: from='client.18528 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:28:12 compute-1 ceph-mon[80135]: from='client.28547 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:28:12 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1549457199' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 23 21:28:12 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/706782067' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 23 21:28:12 compute-1 ceph-mon[80135]: from='client.27316 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:12 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2668548967' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 23 21:28:12 compute-1 ceph-mon[80135]: from='client.18558 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:28:12 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/831638846' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 23 21:28:12 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3240368654' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 23 21:28:12 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2730715203' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 23 21:28:12 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Nov 23 21:28:12 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2960342545' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 23 21:28:12 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:28:12 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:28:12 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:28:12 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:28:12.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:28:12 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Nov 23 21:28:12 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1601245962' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 23 21:28:12 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Nov 23 21:28:12 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1175955785' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 23 21:28:12 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Nov 23 21:28:12 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2377348041' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 23 21:28:13 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Nov 23 21:28:13 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3375474678' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 23 21:28:13 compute-1 ceph-mon[80135]: from='client.27334 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:28:13 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/520962965' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 23 21:28:13 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2519341427' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 23 21:28:13 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2960342545' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 23 21:28:13 compute-1 ceph-mon[80135]: from='client.27346 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:28:13 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1601245962' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 23 21:28:13 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/811161713' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 23 21:28:13 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1175955785' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 23 21:28:13 compute-1 ceph-mon[80135]: pgmap v1411: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:28:13 compute-1 ceph-mon[80135]: from='client.27355 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:28:13 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2737878222' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 23 21:28:13 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2377348041' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 23 21:28:13 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3415009304' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 23 21:28:13 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3843252747' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 23 21:28:13 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3375474678' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 23 21:28:13 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:28:13 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:28:13 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:28:13.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:28:13 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Nov 23 21:28:13 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1180863002' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 23 21:28:13 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 23 21:28:13 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/660462273' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 23 21:28:13 compute-1 systemd[1]: Starting Hostname Service...
Nov 23 21:28:13 compute-1 systemd[1]: Started Hostname Service.
Nov 23 21:28:13 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Nov 23 21:28:13 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/824463432' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 23 21:28:13 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Nov 23 21:28:13 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4029995768' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 23 21:28:13 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Nov 23 21:28:13 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/890621600' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 23 21:28:14 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2074116635' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 23 21:28:14 compute-1 ceph-mon[80135]: from='client.27367 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:28:14 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1261971723' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 23 21:28:14 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1180863002' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 23 21:28:14 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3737923499' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 23 21:28:14 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/660462273' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 23 21:28:14 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/824463432' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 23 21:28:14 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2065148221' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 23 21:28:14 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/4029995768' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 23 21:28:14 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/219328293' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 23 21:28:14 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1393218024' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 23 21:28:14 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/890621600' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 23 21:28:14 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/608735071' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 23 21:28:14 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/3321771611' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 23 21:28:14 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/4077139984' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 23 21:28:14 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2956273160' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 23 21:28:14 compute-1 nova_compute[230183]: 2025-11-23 21:28:14.451 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:28:14 compute-1 nova_compute[230183]: 2025-11-23 21:28:14.452 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:28:14 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:28:14 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:28:14 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:28:14.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:28:14 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Nov 23 21:28:14 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/191927864' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 23 21:28:14 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Nov 23 21:28:14 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/65066657' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 23 21:28:15 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2582421277' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 23 21:28:15 compute-1 ceph-mon[80135]: from='client.28721 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:15 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/191927864' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 23 21:28:15 compute-1 ceph-mon[80135]: from='client.18717 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:15 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2697393913' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 23 21:28:15 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/869848352' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 23 21:28:15 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/65066657' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 23 21:28:15 compute-1 ceph-mon[80135]: pgmap v1412: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 23 21:28:15 compute-1 ceph-mon[80135]: from='client.28739 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:15 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2027232004' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 23 21:28:15 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2596138598' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 23 21:28:15 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2808481883' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 23 21:28:15 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:28:15 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:28:15 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:28:15.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:28:15 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Nov 23 21:28:15 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3172043641' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 23 21:28:15 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 23 21:28:15 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1750229248' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 23 21:28:16 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions"} v 0)
Nov 23 21:28:16 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4188482623' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 23 21:28:16 compute-1 ceph-mon[80135]: from='client.18744 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:16 compute-1 ceph-mon[80135]: from='client.28751 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:28:16 compute-1 ceph-mon[80135]: from='client.18759 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:28:16 compute-1 ceph-mon[80135]: from='client.28769 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:28:16 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2971409743' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 23 21:28:16 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2531662656' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 23 21:28:16 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/3172043641' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 23 21:28:16 compute-1 ceph-mon[80135]: from='client.18777 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:28:16 compute-1 ceph-mon[80135]: from='client.28790 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:28:16 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1104105981' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 23 21:28:16 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1750229248' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 23 21:28:16 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3758994834' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 23 21:28:16 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/4188482623' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 23 21:28:16 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2580969354' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 23 21:28:16 compute-1 nova_compute[230183]: 2025-11-23 21:28:16.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:28:16 compute-1 nova_compute[230183]: 2025-11-23 21:28:16.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:28:16 compute-1 nova_compute[230183]: 2025-11-23 21:28:16.456 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:28:16 compute-1 nova_compute[230183]: 2025-11-23 21:28:16.457 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:28:16 compute-1 nova_compute[230183]: 2025-11-23 21:28:16.457 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:28:16 compute-1 nova_compute[230183]: 2025-11-23 21:28:16.457 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 23 21:28:16 compute-1 nova_compute[230183]: 2025-11-23 21:28:16.457 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:28:16 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:28:16 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:28:16 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:28:16.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:28:16 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Nov 23 21:28:16 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2146727687' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 23 21:28:16 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:28:16 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2379998528' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:28:16 compute-1 nova_compute[230183]: 2025-11-23 21:28:16.901 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:28:16 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Nov 23 21:28:16 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/241993972' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 23 21:28:17 compute-1 nova_compute[230183]: 2025-11-23 21:28:17.042 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 21:28:17 compute-1 nova_compute[230183]: 2025-11-23 21:28:17.044 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4526MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 21:28:17 compute-1 nova_compute[230183]: 2025-11-23 21:28:17.044 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 21:28:17 compute-1 nova_compute[230183]: 2025-11-23 21:28:17.044 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 21:28:17 compute-1 nova_compute[230183]: 2025-11-23 21:28:17.101 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 21:28:17 compute-1 nova_compute[230183]: 2025-11-23 21:28:17.102 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 21:28:17 compute-1 nova_compute[230183]: 2025-11-23 21:28:17.122 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 21:28:17 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:28:17 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 21:28:17 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:28:17.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 21:28:17 compute-1 ceph-mon[80135]: from='client.18807 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:28:17 compute-1 ceph-mon[80135]: from='client.28817 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:28:17 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2172058909' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 23 21:28:17 compute-1 ceph-mon[80135]: from='client.18831 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:28:17 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/3086242074' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 23 21:28:17 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2146727687' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 23 21:28:17 compute-1 ceph-mon[80135]: from='client.28835 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:28:17 compute-1 ceph-mon[80135]: from='client.27490 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:17 compute-1 ceph-mon[80135]: pgmap v1413: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:28:17 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/913007999' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 23 21:28:17 compute-1 ceph-mon[80135]: from='client.18852 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:28:17 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/2379998528' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:28:17 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/241993972' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 23 21:28:17 compute-1 ceph-mon[80135]: from='client.27502 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:28:17 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 23 21:28:17 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 23 21:28:17 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/568664151' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 23 21:28:17 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 23 21:28:17 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 23 21:28:17 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 23 21:28:17 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 23 21:28:17 compute-1 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 21:28:17 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 21:28:17 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/903892447' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:28:17 compute-1 nova_compute[230183]: 2025-11-23 21:28:17.569 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 21:28:17 compute-1 nova_compute[230183]: 2025-11-23 21:28:17.575 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 21:28:17 compute-1 nova_compute[230183]: 2025-11-23 21:28:17.591 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 21:28:17 compute-1 nova_compute[230183]: 2025-11-23 21:28:17.593 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 21:28:17 compute-1 nova_compute[230183]: 2025-11-23 21:28:17.594 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 21:28:17 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 23 21:28:17 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 23 21:28:17 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Nov 23 21:28:17 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/159930549' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 23 21:28:18 compute-1 ceph-mon[80135]: from='client.28865 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:28:18 compute-1 ceph-mon[80135]: from='client.18876 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:28:18 compute-1 ceph-mon[80135]: from='client.27508 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:18 compute-1 ceph-mon[80135]: from='client.27517 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:28:18 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 23 21:28:18 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 23 21:28:18 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 23 21:28:18 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 23 21:28:18 compute-1 ceph-mon[80135]: from='client.28892 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:28:18 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/903892447' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 21:28:18 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 23 21:28:18 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 23 21:28:18 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 23 21:28:18 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 23 21:28:18 compute-1 ceph-mon[80135]: from='client.18915 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:28:18 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/159930549' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 23 21:28:18 compute-1 ceph-mon[80135]: from='client.27541 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:28:18 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/1898535550' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 23 21:28:18 compute-1 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 23 21:28:18 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/2962810161' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 23 21:28:18 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:28:18 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:28:18 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:28:18.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:28:18 compute-1 nova_compute[230183]: 2025-11-23 21:28:18.595 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:28:18 compute-1 nova_compute[230183]: 2025-11-23 21:28:18.595 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 21:28:18 compute-1 nova_compute[230183]: 2025-11-23 21:28:18.595 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 21:28:18 compute-1 nova_compute[230183]: 2025-11-23 21:28:18.617 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 21:28:18 compute-1 nova_compute[230183]: 2025-11-23 21:28:18.617 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:28:18 compute-1 nova_compute[230183]: 2025-11-23 21:28:18.617 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 21:28:19 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df"} v 0)
Nov 23 21:28:19 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1544273329' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 23 21:28:19 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:28:19 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:28:19 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:28:19.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:28:19 compute-1 nova_compute[230183]: 2025-11-23 21:28:19.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:28:19 compute-1 nova_compute[230183]: 2025-11-23 21:28:19.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 21:28:19 compute-1 nova_compute[230183]: 2025-11-23 21:28:19.454 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 21:28:19 compute-1 ceph-mon[80135]: from='client.28955 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:19 compute-1 ceph-mon[80135]: from='client.27565 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:28:19 compute-1 ceph-mon[80135]: from='client.18954 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 21:28:19 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/4173577089' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 23 21:28:19 compute-1 ceph-mon[80135]: from='client.27577 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:28:19 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/1555797194' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 23 21:28:19 compute-1 ceph-mon[80135]: pgmap v1414: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 23 21:28:19 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/967447396' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 23 21:28:19 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1544273329' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 23 21:28:19 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/272507563' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 23 21:28:19 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/141405296' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 23 21:28:19 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Nov 23 21:28:19 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1486182146' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 23 21:28:19 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Nov 23 21:28:19 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4069329090' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 23 21:28:20 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 23 21:28:20 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 23 21:28:20 compute-1 ceph-mon[80135]: from='client.27592 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:28:20 compute-1 ceph-mon[80135]: from='client.27604 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 23 21:28:20 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/1486182146' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 23 21:28:20 compute-1 ceph-mon[80135]: from='client.? 192.168.122.102:0/126716860' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 23 21:28:20 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2668592079' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 23 21:28:20 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 23 21:28:20 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 23 21:28:20 compute-1 ceph-mon[80135]: from='client.? 192.168.122.101:0/4069329090' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 23 21:28:20 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 23 21:28:20 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 23 21:28:20 compute-1 ceph-mon[80135]: from='client.? 192.168.122.100:0/2903969529' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 23 21:28:20 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 23 21:28:20 compute-1 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 23 21:28:20 compute-1 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 21:28:20 compute-1 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 21:28:20 compute-1 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:28:20.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 21:28:20 compute-1 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Nov 23 21:28:20 compute-1 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1084746240' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
